Monday 1 April 2024

S3 File Editor App: Integrating Python with AWS

In this post, I describe how I tackled the problem of building a Python application that communicates with AWS services.

You can find the code in this repository: https://github.com/gianlucaballa/s3-file-editor

Task: Create an application that allows users to effortlessly share and modify written information (such as materials, quantities, and important notes) with one another.

The application needs to meet the following criteria:

  • Any user should be able to open the application on a PC with a double-click.
  • Upon opening the application, users should have immediate access to a simple text file for viewing, editing, and saving (structure for the text file is not necessary – a blank text file suffices).
  • The user can read, modify, and save the text file for other users to use.

This application will be specifically used by building engineers to easily share information such as materials to buy, quantities, and similar notes regarding construction sites.

No overheads: ease of use is vital here.

Key criteria: accessibility, seamless file handling, collaboration. 

----------------------------------------------------------------------------------------------------------------------

My solution:

At first, I thought about using RDS in AWS and creating a simple table with 3 columns for a MySQL database. I steered away from this solution because it would have overcomplicated the code with the execution of SQL commands, without producing any significant benefit.

Instead, I proceeded with the following steps:

  1. I created an S3 bucket in AWS using a dedicated IAM user with specific permissions and credentials.
  2. I uploaded an empty text file with a specific name to the S3 bucket.
  3. I wrote a Python application using boto3*.

*(Boto3 is the official AWS SDK for Python. It provides an easy-to-use Python interface to interact with various AWS services such as S3. With Boto3, developers can programmatically manage AWS resources, automate tasks, and build applications that leverage AWS services without needing to manually configure API requests). 

This solution proved effective and satisfied the key criteria established above, as demonstrated below.

----------------------------------------------------------------------------------------------------------------------

Explanation:

Walking through the Python code that I wrote for the application (see point 3. above) is the best way to understand how the application operates.

The code (formatted with Black):

import boto3
import os


def download_file(bucket_name, key, local_filename):
    s3 = boto3.client(
        "s3",
        region_name="insert_region",
        aws_access_key_id="insert_id",
        aws_secret_access_key="insert_secret",
    )
    s3.download_file(bucket_name, key, local_filename)


def upload_file(bucket_name, key, local_filename):
    s3 = boto3.client(
        "s3",
        region_name="insert_region",
        aws_access_key_id="insert_id",
        aws_secret_access_key="insert_secret",
    )
    s3.upload_file(local_filename, bucket_name, key)


def edit_file(local_filename):
    os.system(f'notepad "{local_filename}"')


def main():
    bucket_name = "insert_bucket_name"
    key = "insert_file_name.txt"
    local_filename = "temp_file.txt"

    # Download the file from S3.
    download_file(bucket_name, key, local_filename)

    # Allow the user to edit the file.
    edit_file(local_filename)

    # Upload the modified file back to S3.
    upload_file(bucket_name, key, local_filename)

    # Clean up the temporary file.
    os.remove(local_filename)

    print("File has been updated and uploaded to S3.")


if __name__ == "__main__":
    main()

import boto3
import os

These lines import the "boto3" module and the "os" module. The "os" module provides a way of using operating-system-dependent functionality; for example, it allows the application to execute a command in the system's shell.
 
def download_file(bucket_name, key, local_filename):
    s3 = boto3.client(
        "s3",
        region_name="insert_region",
        aws_access_key_id="insert_id",
        aws_secret_access_key="insert_secret",
    )
    s3.download_file(bucket_name, key, local_filename)
 
The download_file function takes three arguments.
Once called, it creates an S3 client object: in other words, an interface through which your Python code can communicate with Amazon S3. To create the client, it is necessary to specify the region in which the S3 bucket exists, along with the access key ID and secret access key of the AWS user (see point 1. above).
 
Hardcoding keys is never a good idea because they can be seen by anyone with access to the code! I hardcoded them only to test the application.
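
A safer alternative (shown below as a sketch, not part of the original application) is to omit the keys and let Boto3 fall back to its default credential chain: environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY), the shared credentials file (~/.aws/credentials, which the AWS CLI creates), or an IAM role.

import boto3


def download_file(bucket_name, key, local_filename):
    # With no keys passed in, Boto3 searches its default credential chain.
    s3 = boto3.client("s3", region_name="insert_region")
    s3.download_file(bucket_name, key, local_filename)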
 
The download_file function then downloads the object with the specified key/name (key) from the specified S3 bucket (bucket_name) to a local file (local_filename) on the file system (the local file is created in the directory from which the script is run). s3.download_file(bucket_name, key, local_filename) is a method provided by Boto3.
 
def upload_file(bucket_name, key, local_filename):
    s3 = boto3.client(
        "s3",
        region_name="insert_region",
        aws_access_key_id="insert_id",
        aws_secret_access_key="insert_secret",
    )
    s3.upload_file(local_filename, bucket_name, key)
 
This function is similar, but it uploads the local file (local_filename) from the file system to the specified S3 bucket (bucket_name) under the specified key/name (key). s3.upload_file(local_filename, bucket_name, key) is a method provided by Boto3; note that the argument order differs from download_file, with the local file name coming first.
 
def edit_file(local_filename):
    os.system(f'notepad "{local_filename}"')
 
The edit_file function opens the specified file (local_filename) in a text editor using the os.system function, which runs a command in the system's shell. Since I am testing this application on Windows, I launch Notepad. Because os.system waits for the command to finish, the script resumes (and uploads the file) only after the user saves and closes the editor.
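
For reference, a cross-platform variant of edit_file might look like the sketch below (my assumption, not part of the original application). Like os.system, subprocess.run waits for the editor to close before the script continues, which is what allows the upload step to see the saved changes.

import os
import subprocess
import sys


def edit_file(local_filename):
    if sys.platform == "win32":
        # Notepad blocks until the user closes it.
        subprocess.run(["notepad", local_filename])
    elif sys.platform == "darwin":
        # -t opens the default text editor; -W waits for it to quit.
        subprocess.run(["open", "-W", "-t", local_filename])
    else:
        # Fall back to the editor named in $EDITOR, or nano.
        subprocess.run([os.environ.get("EDITOR", "nano"), local_filename])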
 
def main():
    bucket_name = "insert_bucket_name"
    key = "insert_file_name.txt"
    local_filename = "temp_file.txt"
 
    # Download the file from S3.
    download_file(bucket_name, key, local_filename)

    # Allow the user to edit the file.
    edit_file(local_filename)

    # Upload the modified file back to S3.
    upload_file(bucket_name, key, local_filename)

    # Clean up the temporary file.
    os.remove(local_filename)

    print("File has been updated and uploaded to S3.")
 
In Python, main() is a conventional name for the main entry point of a script or program. It is a function that typically contains the main logic or sequence of actions that the script should perform when executed. The purpose of defining a main() function is to encapsulate the main functionality of the script and to provide a clear starting point for the program's execution. This makes the code more modular and easier to understand, as the main logic is isolated within a specific function.
 
In this case, main() calls the functions that I defined above in a specific order (download, edit, upload), cleans up the temporary file on the PC once it has been uploaded to S3 (using os.remove(local_filename)) and prints a message once the process has concluded. In main(), I also define the variables that are used as arguments for the functions: the name of the S3 bucket (see point 1. above), the name of the .txt file that I initially uploaded to S3 (see point 2. above) and the local file name (temp_file.txt works fine for this).

if __name__ == "__main__":
    main()

This final block ensures that the main() function is executed only when the script is run directly, not when it's imported as a module into another script. This allows the script to be used both as a stand-alone program and as a reusable module. 
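
For example, assuming the script above is saved as s3_file_editor.py (a file name chosen here purely for illustration), another program could reuse download_file without running the whole download-edit-upload flow:

from s3_file_editor import download_file

# main() does not run on import, so only this download executes.
download_file("insert_bucket_name", "insert_file_name.txt", "local_copy.txt")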
 
Final notes:
  • Ensure that the IAM user associated with the AWS credentials has the necessary permissions to access the specified S3 bucket and perform read/write operations on its objects (see the error-handling sketch after these notes).
  • Make sure to handle sensitive information such as AWS access keys securely.
  • This script is intended for educational purposes and can be modified to suit specific requirements. 
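
On the first point, a quick way to surface permission problems is to wrap the S3 calls in a try/except. A minimal sketch (my addition, not part of the original script):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="insert_region")
try:
    s3.download_file("insert_bucket_name", "insert_file_name.txt", "temp_file.txt")
except ClientError as e:
    # A 403 error here usually means the IAM user lacks s3:GetObject on the bucket.
    print(f"S3 request failed: {e}")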

----------------------------------------------------------------------------------------------------------------------

To transform the script into a clickable .exe, I can use PyInstaller (installed with pip install pyinstaller):

pyinstaller --onefile my_script_name.py

After PyInstaller finishes, it creates a "dist" directory (in the directory where the command is run) containing the executable file. This executable can be distributed to others.

Friday 1 March 2024

Adapting Visual References in Concept Art For Monster Design

(This is a short version of one of my academic articles that you can find here).

Adapting visual references (3D renders and photos) into concept art has proven effective in high-budget game development and film production.

For example, embedding visual references into an artwork is significant in triple-A video game development for its time efficiency, correct use of perspective, and establishment of believable textures. The pursuit of realism enabled by visual references also proves advantageous in designing uncanny monsters.

Visual references can be manipulated in software such as Photoshop to prepare not only the blueprints for the 3D modelling/sculpting stage but also to design the special effects makeup for live-action monsters and animatronics. 

There is a gap between our current understanding of the uncanny valley, as it is defined in robotics, and the process of designing characters. Investigating this subject is important for a dual reason:

  1. It moves knowledge forward in the field of the uncanny valley’s applications to concept art, since this has not been investigated in depth.
  2. It helps professional concept artists in shaping and controlling the uncanniness of antagonistic characters.

Analyses of industrial practice showed that:

  • The use of photorealistic references implanted directly into concept art is not mandatory for all game productions and digital-effects film productions, but it is increasingly encouraged for certain products, such as triple-A games, because it reduces hiccups in the workflow: this practice spares the artist from hand-painting texture details through brush strokes and minimizes perspective mistakes.
  • When it comes to designing uncanny monsters, the use of these references pushes the design towards realism, facilitating the triggering of the uncanny valley phenomenon at the design level, as highlighted by the current literature on the subject. The horror genre often favours rich textures (e.g., organic material such as blood, filth, rust, etc.) not only in characters but in environments too; this adds to the subtle subversion of the concept of affinity, which is at the centre of the uncanny valley and is commonly used in horror to make the viewer feel vulnerable.

In light of this, the adaptation of photorealistic visual references can represent a strong starting point for concept artists who aim to trigger "uncanniness" through character design. This type of reference should be inserted directly into the work and then transformed through deformation tools, photo bashing and minimal painting. The use of "previs", in collaboration with the animation department, can help develop the monster's movements, since these contribute to the uncanniness of the creature.

A study on the finalisation of the sculpting and modelling work, analysing how to convey the texture of the uncanny monster in a three-dimensional environment (e.g., through UV mapping), would represent a natural development for research that aims to formalise a specific industrial pipeline.

Furthermore, an investigation into the role of scriptwriting and sound in making uncanny monsters for films and games would be a valuable supplement, because it could clarify how the integration of visuals, narrative and audio influences the design of the uncanny monster. A study on sound in particular would offer significant insights that could also be applied in robotics.

Generative AI – like Midjourney, Stable Diffusion and OpenAI's DALL-E – might bring a significant change to contemporary practice. Indeed, the use of AI has already proved powerful for testing ideas, establishing colour palettes and producing suggestive photographic references with a high degree of realism. Because of this, and because of its impact on concept art practice, generative AI applied to this subject requires further investigation.

Photo by Pavel Danilyuk: https://www.pexels.com/photo/close-up-shot-of-white-robot-toy-8294606/

 

Tuesday 20 February 2024

Terraform: Launching an EC2 Instance in AWS

Terraform is an infrastructure as code (IaC) tool that lets a user define both cloud and on-prem resources in human-readable configuration files that can be versioned, reused, and shared. In other words, it allows users to define and provision infrastructure resources, such as virtual machines, storage accounts, and networks, in a declarative configuration language.

For example, on Amazon Web Services (AWS), manually building an infrastructure through the console would be lengthy. Terraform makes it faster and easier by providing a consistent and reproducible way to define, provision, and manage resources on AWS through a programmatic approach.

The key components of Terraform include:
  • Configuration files: These files define the desired state of your infrastructure, specifying the resources and their configurations.
  • Providers: Terraform providers are plugins that interact with APIs of different cloud providers (such as AWS, Azure, Google Cloud Platform, etc.) or other services to manage resources.
  • Resource types: Terraform supports a wide range of resource types, representing various infrastructure components like virtual machines, networks, databases etc.
  • Execution plans: Terraform generates execution plans to show what actions it will take when you apply your configuration, giving you a preview of changes before they are implemented.
  • State management: Terraform maintains a state file that keeps track of the current state of your infrastructure. This allows Terraform to understand the relationships between resources and manage updates efficiently.

The following tutorial aims to show how to create a basic infrastructure: we will provision an EC2 instance on AWS. EC2 instances are virtual machines running on AWS, and a common component of many infrastructure projects. EC2 instances can be configured with various CPU, memory, storage, and networking options to meet different workload requirements. EC2 is widely used for hosting websites, running applications, processing data, etc.

Requirements. For this tutorial, it is assumed that:
  1. You have installed Terraform on your machine.
  2. You know a bit of AWS.
  3. You have installed AWS CLI.
  4. You already have an account with AWS*.
*(Create an IAM user for this exercise rather than using the Root account. Give it the right permissions, depending on what you want to create through Terraform – an EC2 instance in this case).

I used Visual Studio Code as my source-code editor (with the official Terraform extension) and GitHub.
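
To give an idea of where the tutorial is heading, a minimal configuration might look like the sketch below (the region and AMI ID are placeholders; the AMI ID must be valid in the region you choose):

# main.tf — minimal sketch of an EC2 instance definition.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "insert_region"
}

resource "aws_instance" "example" {
  ami           = "insert_ami_id"
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-ec2-example"
  }
}

Running terraform init, terraform plan, and terraform apply in the same directory would then download the AWS provider, preview the changes, and create the instance.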
----------------------------------------------------------------------------------------------------------------------

Docker (provisional title)

(In development).