Wednesday, 1 May 2024

Dockerizing Python Apps on Windows

Docker, first released in 2013, is a popular tool in software development and deployment workflows. It is a platform that uses OS-level virtualization* to deliver software in packages called containers.

*(a form of virtualization where the operating system kernel allows multiple isolated user-space instances, called containers, to run concurrently on a single host. The host operating system's kernel is shared among the containers, in contrast to traditional virtualization, where each virtual machine (VM) runs its own separate operating system. For this reason, containers are typically faster to start up and use less memory!).

Docker is used for developing, shipping, and running applications. It utilizes containerization technology to package applications and their dependencies into standardized units called containers, which run the application in isolation from other processes on the host machine. These containers encapsulate everything needed to run the application, including the code, runtime, libraries, and system tools, ensuring that the application behaves consistently across different environments.

Let’s imagine being part of a development team working on a Node.js application with very specific version requirements. This application needs to be shared with another developer on the team, who must run it on their computer. To ensure it runs correctly, they need to set up their development environment to match the original one. This involves installing the same version of Node.js, all project dependencies, and configurations such as environment variables. This setup takes significant effort and must be repeated on every machine that runs the application. Docker and containers were developed to solve this problem.
 
-------------------------------------------------------------------
OBJECTIVE

In this post, I will illustrate how to (1) install Docker on Windows, (2) create an image for a simple Python application, and (3) run the image. 


THE APPLICATION

When given a name, this Python application outputs a tailored greeting message.

def greeting(name):
    phrase = 'Hello, ' + name + '!'
    return phrase

if __name__ == '__main__':
    name = input('What is your name? ')
    print(greeting(name))

This Python file, called greeting.py, is saved in a folder that I specifically created for this exercise.


INSTALL DOCKER DESKTOP ON WINDOWS

To use Docker, Docker Desktop should be installed. Since I am using Windows, the installation is trickier than on a machine that already runs a Linux distribution.

The reason for this is beyond the scope of this post, but it might be the subject of another one.

This page contains the download URL, information about system requirements, and instructions on how to install Docker Desktop for Windows. Essentially, Docker needs a full Linux environment to run; on Windows, this can be provided by WSL (the Windows Subsystem for Linux) without setting up a traditional virtual machine. Instructions to install WSL can be found here.
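On recent versions of Windows, WSL can usually be installed from an elevated PowerShell prompt with a single command (check the official instructions for your Windows version):

wsl --install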
 
 
DOCKER IMAGES

Docker images are blueprints (read-only) for containers. They are essentially a snapshot of a filesystem that includes everything needed to run an application: the application code, runtime environment, libraries, dependencies, and any additional configurations or commands required. 
 
Once a Docker image is created, it cannot be modified; instead, it must be recreated. When you run a Docker image, a container is created based on that image. Images can typically be shared without significant worries regarding compatibility.

Docker images are organised in layers. Generally, the first layer is the parent image which includes a lightweight OS and a runtime environment. Parent images for Docker can be found in a public repository called Docker Hub. Here we have a list of parent images to choose from and "pull" (download).
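For example, the Python parent image used later in this post could be pulled manually with:

docker pull python:3.12-alpine

(This step is optional: docker build pulls the image automatically if it is not already available locally.)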


DOCKERFILE

A Dockerfile is a text file that contains a set of instructions used to build a Docker image. These instructions specify the steps needed to create a Docker image, including setting up the environment, installing dependencies, copying files into the image, and configuring the container's behaviour.

Each instruction in a Dockerfile roughly translates to an image layer. The order of Dockerfile instructions matters.

How a Dockerfile translates into a stack of layers in a container image (Docker docs).

In the folder where my Python application is, I create a file called "Dockerfile", with a capital "D" and no extension (I use Visual Studio Code with the Docker extension installed). A list of instructions that can be used in a Dockerfile can be found here.

The Dockerfile for my Python application contains:

# Define the structure of the Docker image. 
# Start from the top.
  
# Use a Python runtime as the base image.
# Found on Docker Hub.
FROM python:3.12-alpine

# Set the working directory in the container (/app).
# Commands are now executed relative to this dir.
WORKDIR /app

# Copy the current dir contents into /app.
# First "."  -> the dir where the Dockerfile is
#               (source directory, host machine).
# Second "." -> /app
#               (destination directory, container).
COPY . .

# Run the application when the container starts.
CMD ["python", "greeting.py"]

 

DOCKER BUILD

The docker build command executes the instructions in the Dockerfile to build the Docker image.

I make sure that Docker Desktop is running on my computer. 

In the same directory where the Dockerfile is, in the terminal, I type docker build -t name_of_the_app .

-t is a flag used to give a name and a tag to the image (the tag is "latest" if not specified).

. is the build context: the directory whose contents are sent to Docker. Here it is the current directory, which also contains the Dockerfile.

This creates the Docker image, which can be found in Docker Desktop under "Images". From there, I can select the image and run a new container. Then I can select that container and start it.

However, starting the container does not launch the Python application. In fact, under "Logs" for that specific container in Docker Desktop, I found an EOFError related to the part of the code which asks for an input (the user's name). In the context of running a Docker container, this error occurs when there is no interactive terminal available for the container to accept user input: by default, a Docker container runs non-interactively, meaning it does not accept input from the user. The container for this image must be recreated with interactivity enabled. Read on to see how.


DOCKER RUN VIA CLI

docker images -> list all available images.

docker run -it --name container_1 name_of_the_app -> run a container named "container_1" from the "name_of_the_app" image.

-it is used to

  • allocate a pseudo-TTY, which enables terminal-like features such as displaying output and accepting input (t).
  • enable interactivity (i). The "-i" flag instructs Docker to attach STDIN to the container, allowing me to interact with it. The Python application requires the user to enter a name as input, so interactivity is needed. This solves the EOFError related to the input.

With this, the Python application runs in the terminal without any problem as soon as I create the container. If I want to run it again, I need to restart the container.
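For example, the exchange in the terminal might look like this (the name entered is arbitrary):

What is your name? Gianluca
Hello, Gianluca!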

docker ps -> list all running containers.

docker ps -a -> list all containers.

docker stop container_1 -> to stop "container_1".

docker start container_1 -> to start a "container_1" which was stopped.

In my case, when I start the container again, I need to make it interactive, so I have to type:  

docker start -i container_1

There is no need to enable terminal-like features with the "-t" flag again, since these were embedded in the container when I first created it through the CLI.

However, stopping and restarting a container every time I want to run the application is not practical: the application runs as the container's main process, so the container exits as soon as the program finishes and must be started again for each run.

Alternatively, I can use

docker exec -it container_1 python greeting.py 

to execute a command inside a running container without restarting it. Now, when my container is running, I can execute the Python application whenever I want!

In this post, I did not cover .dockerignore and volumes; they might become the subject of another post.

Monday, 1 April 2024

S3 File Editor App: Integrating Python with AWS

In this post, I illustrate how I tackled the problem of building a specific Python application that communicates with AWS services.

You can find the code in this repository: https://github.com/gianlucaballa/s3-file-editor

Task: Create an application that allows users to effortlessly share and modify written information such as materials, quantities, and important notes with one another.

The application needs to meet the following criteria:

  • Any user should be able to open the application on a PC with a double-click.
  • Upon opening the application, users should have immediate access to a simple text file for viewing, editing, and saving (structure for the text file is not necessary – a blank text file suffices).
  • The user can read, modify, and save the text file for other users to use.

This application will be used specifically by building engineers to easily share information such as materials to buy, quantities, and similar notes regarding construction sites.

No overheads: ease of use is vital here.

Key criteria: accessibility, seamless file handling, collaboration. 

----------------------------------------------------------------------------------------------------------------------

My solution:

At first, I thought about using RDS in AWS and creating a simple three-column table in a MySQL database. I steered away from this solution because it would have overcomplicated the code with the execution of SQL commands, without producing any significant benefit.

Instead, I proceeded with the following steps:

  1. I created an S3 bucket in AWS using a specific user with specific permissions and credentials.
  2. I uploaded an empty text file with a specific name to the S3 bucket.
  3. I wrote a Python application using boto3*.

*(Boto3 is the official AWS SDK for Python. It provides an easy-to-use Python interface to interact with various AWS services such as S3. With Boto3, developers can programmatically manage AWS resources, automate tasks, and build applications that leverage AWS services without needing to manually configure API requests). 

This solution proved effective and satisfied the established key criteria, as demonstrated below.

----------------------------------------------------------------------------------------------------------------------

Explanation:

By analysing the Python code that I wrote for the application (see point 3 above), you can better understand how the application operates.

The code (formatted with Black):

import boto3
import os


def download_file(bucket_name, key, local_filename):
    s3 = boto3.client(
        "s3",
        region_name="insert_region",
        aws_access_key_id="insert_id",
        aws_secret_access_key="insert_secret",
    )
    s3.download_file(bucket_name, key, local_filename)


def upload_file(bucket_name, key, local_filename):
    s3 = boto3.client(
        "s3",
        region_name="insert_region",
        aws_access_key_id="insert_id",
        aws_secret_access_key="insert_secret",
    )
    s3.upload_file(local_filename, bucket_name, key)


def edit_file(local_filename):
    os.system(f'notepad "{local_filename}"')


def main():
    bucket_name = "insert_bucket_name"
    key = "insert_file_name.txt"
    local_filename = "temp_file.txt"

    # Download the file from S3.
    download_file(bucket_name, key, local_filename)

    # Allow the user to edit the file.
    edit_file(local_filename)

    # Upload the modified file back to S3.
    upload_file(bucket_name, key, local_filename)

    # Clean up the temporary file.
    os.remove(local_filename)

    print("Success!")


if __name__ == "__main__":
    main()

import boto3
import os

These lines import the "boto3" module and the "os" module. The "os" module provides a way of using operating-system-dependent functionality; for example, it allows the application to execute a command in the system's shell.
 
def download_file(bucket_name, key, local_filename):
    s3 = boto3.client(
        "s3",
        region_name="insert_region",
        aws_access_key_id="insert_id",
        aws_secret_access_key="insert_secret",
    )
    s3.download_file(bucket_name, key, local_filename)
 
This function, called download_file, takes 3 arguments.
Once called, it creates an S3 client object: an interface through which the Python code can communicate with Amazon S3. To do this, it is necessary to specify the region in which the S3 bucket exists, plus the AWS access key and secret access key of the AWS user (see point 1 above).
 
Hardcoding keys is never a good idea, because anyone who can read the code can see them! I hardcoded them here only to test the application.
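A safer alternative, as a sketch: create the client without explicit keys and let Boto3 resolve credentials from its default chain.

import boto3

# No keys in the source: Boto3 falls back to environment variables
# (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY), the shared credentials
# file (~/.aws/credentials), or an attached IAM role.
s3 = boto3.client("s3", region_name="insert_region")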
 
Then the download_file function downloads the file with the specified key/name (key) from the specified S3 bucket (bucket_name) to a local file (local_filename) on the file system, usually in the same directory as the script. s3.download_file(bucket_name, key, local_filename) is a method provided by Boto3.
 
def upload_file(bucket_name, key, local_filename):
    s3 = boto3.client(
        "s3",
        region_name="insert_region",
        aws_access_key_id="insert_id",
        aws_secret_access_key="insert_secret",
    )
    s3.upload_file(local_filename, bucket_name, key)
 
This function is similar, but it uploads the local file (local_filename) from the file system to the specified S3 bucket (bucket_name) under the specified key/name (key). s3.upload_file(local_filename, bucket_name, key) is a method provided by Boto3; note that the argument order differs from download_file.
 
def edit_file(local_filename):
    os.system(f'notepad "{local_filename}"')
 
The edit_file function opens the specified file (local_filename) in a text editor using the os.system function. Since I am testing this application on Windows, I specified Notepad. Importantly, os.system blocks until Notepad is closed, so the script continues (and uploads the file) only after the user finishes editing.
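As a side note, a cross-platform variant of this function might look like the following sketch (it assumes nano is available on non-Windows systems):

import os
import platform
import subprocess


def edit_file(local_filename):
    if platform.system() == "Windows":
        # Blocks until Notepad is closed.
        os.system(f'notepad "{local_filename}"')
    else:
        # Assumes nano is installed; blocks until the editor exits.
        subprocess.run(["nano", local_filename])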
 
def main():
    bucket_name = "insert_bucket_name"
    key = "insert_file_name.txt"
    local_filename = "temp_file.txt"
 
    # Download the file from S3.
    download_file(bucket_name, key, local_filename)

    # Allow the user to edit the file.
    edit_file(local_filename)

    # Upload the modified file back to S3.
    upload_file(bucket_name, key, local_filename)

    # Clean up the temporary file.
    os.remove(local_filename)

    print("File has been updated and uploaded to S3.")
 
In Python, main() is a conventional name for the main entry point of a script or program. It is a function that typically contains the main logic or sequence of actions that the script should perform when executed. The purpose of defining a main() function is to encapsulate the main functionality of the script and to provide a clear starting point for the program's execution. This makes the code more modular and easier to understand, as the main logic is isolated within a specific function.
 
In this case, main() is used to call the functions defined above in a specific order (download, edit, upload), clean up the temporary file on the PC once it has been uploaded to S3 (using os.remove(local_filename)), and print a message once the process is complete. In main(), I also set the variables that will be used as arguments for the functions: the name of the S3 bucket (see point 1 above), the name of the .txt file that I initially uploaded to S3 (see point 2 above), and the local file name (temp_file.txt works fine for this).

if __name__ == "__main__":
    main()

This final block ensures that the main() function is executed only when the script is run directly, not when it's imported as a module into another script. This allows the script to be used both as a stand-alone program and as a reusable module. 
 
Final notes:
  • Ensure that the IAM user associated with the AWS credentials has the necessary permissions to access the specified S3 bucket and perform read/write operations on its objects (see the policy sketch after these notes).
  • Make sure to handle sensitive information such as AWS access keys securely.
  • This script is intended for educational purposes and can be modified to suit specific requirements. 
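For reference, a minimal IAM policy granting just the read and write access this script needs might look like the following sketch (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::insert_bucket_name/*"
    }
  ]
}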

----------------------------------------------------------------------------------------------------------------------

To transform the script into a clickable .exe, I can use PyInstaller (installable with pip install pyinstaller):

pyinstaller --onefile my_script_name.py

After PyInstaller finishes, it will create a "dist" directory in the script's directory, containing the executable file. This executable can be distributed to others.

Friday, 1 March 2024

Adapting Visual References in Concept Art For Monster Design

(This is a short version of one of my academic articles that you can find here).

Adapting visual references (3D renders and photos) into concept art is effective for high-budget game development and film production.

For example, embedding visual references into an artwork for time efficiency, correct use of perspective, and believable textures is significant in triple-A video game development. The pursuit of realism through visual references also proves advantageous in designing uncanny monsters.

Visual references can be manipulated in software such as Photoshop to prepare not only the blueprints for the 3D modelling/sculpting stage but also to design the special effects makeup for live-action monsters and animatronics. 

There is a gap between our current understanding of the uncanny valley, as it is defined in robotics, and the process of designing characters. Investigating this subject is important for a dual reason:

  1. It moves knowledge forward in the field of the uncanny valley’s applications to concept art, since this has not been investigated in depth.
  2. It helps professional concept artists in shaping and controlling the uncanniness of antagonistic characters.

Analyses of industrial practice showed that:

  • The use of photorealistic references implanted directly into concept art is not a mandatory request for all game developments and digital-effects film productions, but it is increasingly encouraged for certain products, such as triple-A games, because it reduces hiccups in the workflow: this practice spares the artist from hand-painting texture details through brush strokes and minimizes perspective mistakes.
  • When it comes to designing uncanny monsters, the use of these references pushes the design towards realism, facilitating the triggering of the uncanny valley phenomenon at the design level, as highlighted by the current literature on the subject. The horror genre often favours rich textures (e.g., organic material such as blood, filth, rust, etc.) not only in characters but in environments too; this adds to the subtle subversion of the concept of affinity, which is at the centre of the uncanny valley and is commonly used in horror to make the viewer feel vulnerable.

In the light of this, the adaptation of photorealistic visual references might represent a strong starting point for concept artists who aim to trigger "uncanniness" through character design. This type of reference should be inserted directly into the work and then transformed through deformation tools, photo bashing, and minimal painting. The use of "previs", in collaboration with the animation department, can help develop the monster’s movements, since these contribute to the uncanniness of the creature.

A study on the finalisation of the sculpting and modelling work, analysing how to convey the texture of the uncanny monster in a three-dimensional environment (e.g., UV mapping), would represent a natural development for research that aims to formalise a specific industrial pipeline.

Furthermore, an investigation into the role of scriptwriting and sound in making uncanny monsters for films and games would be a valuable supplement, because it could clarify how the integration of visuals, narrative, and audio influences the design of the uncanny monster. A study on sound in particular would offer significant insight that could be applied in robotics.

Generative AI – like Midjourney, Stable Diffusion and OpenAI’s DALL-E – might bring a significant change to contemporary practice. The use of AI has already proved powerful in testing ideas, establishing colour palettes, and producing suggestive photographic references with a high degree of realism. Because of this impact on concept art practice, generative AI applied to this subject requires further investigation.

Photo by Pavel Danilyuk: https://www.pexels.com/photo/close-up-shot-of-white-robot-toy-8294606/

 

Tuesday, 20 February 2024

Terraform: Launching an EC2 Instance in AWS

Terraform is an infrastructure as code (IaC) tool that lets users define both cloud and on-prem resources in human-readable configuration files that can be versioned, reused, and shared. In other words, it allows users to define and provision infrastructure resources, such as virtual machines, storage accounts, and networks, in a declarative configuration language.

For example, on Amazon Web Services (AWS), building an infrastructure manually through the console would be lengthy. Terraform makes it faster and easier by providing a consistent and reproducible way to define, provision, and manage resources on AWS through a programmatic approach.

The key components of Terraform include:
  • Configuration files: These files define the desired state of your infrastructure, specifying the resources and their configurations.
  • Providers: Terraform providers are plugins that interact with APIs of different cloud providers (such as AWS, Azure, Google Cloud Platform, etc.) or other services to manage resources.
  • Resource types: Terraform supports a wide range of resource types, representing various infrastructure components like virtual machines, networks, databases etc.
  • Execution plans: Terraform generates execution plans to show what actions it will take when you apply your configuration, giving you a preview of changes before they are implemented.
  • State management: Terraform maintains a state file that keeps track of the current state of your infrastructure. This allows Terraform to understand the relationships between resources and manage updates efficiently.
The following tutorial aims to show how to create a basic infrastructure: we will provision an EC2 instance on AWS. EC2 instances are virtual machines running on AWS and a common component of many infrastructure projects. They can be configured with various CPU, memory, storage, and networking options to meet different workload requirements. EC2 is widely used for hosting websites, running applications, processing data, etc.

Requirements. For this tutorial, it is assumed that:
  1. You have installed Terraform on your machine.
  2. You know a bit of AWS.
  3. You have installed AWS CLI.
  4. You already have an account with AWS*.
*(Create an IAM user for this exercise rather than using the Root account. Give it the right permissions, depending on what you want to create through Terraform – an EC2 instance in this case).

I used Visual Studio Code as the source-code editor (with the official Terraform extension) and GitHub.
----------------------------------------------------------------------------------------------------------------------
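As a preview of the tutorial, a minimal configuration for this exercise might look like the following sketch (the region, AMI ID, and resource names below are placeholders to adapt):

# main.tf (sketch: region, AMI ID and instance type are placeholders)

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "insert_region"
}

resource "aws_instance" "example" {
  ami           = "insert_ami_id" # an AMI ID valid in your region
  instance_type = "t2.micro"      # free-tier eligible in many regions

  tags = {
    Name = "terraform-ec2-example"
  }
}

With this saved as main.tf, terraform init initialises the working directory and downloads the AWS provider, terraform plan previews the changes, and terraform apply provisions the instance (terraform destroy tears it down afterwards).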
