Amazon S3 Cloud Storage Proxying Through NodeJS from Angular Frontend Securely

This article is about securely uploading a file, e.g. an image or a profile picture, to Amazon S3 Cloud Storage without exposing any credentials, by authenticating with JSON Web Tokens and proxying the upload through a NodeJS server that is kept safely in the backend.

For JSON Web Authentication using Angular, please refer to the following article: JSON Web Authentification with Angular 8 and NodeJS

By the end of the article, you will have learned:

  • Transmitting a File as FormData through an API PUT Call
  • Using Multer to Cache Files in Memory for Proxying
  • Configuring a Bucket Policy for AWS S3 Cloud Storage
  • Configuring AWS S3 Integration in NodeJS
  • Transmitting Files through a Backend NodeJS Proxy
  • IAM – Identity and Access Management Principles

Amazon Services Configuration

To register for an AWS Services Account, go to the following link: https://aws.amazon.com/console/

Amazon Services Configuration

Once you are registered, sign in to your account: you will see the AWS Services Dashboard.

AWS Services Dashboard

Section 1: Service Selection

This menu is the complete list of all Amazon services at your disposal. It is searchable, and AWS keeps updating it, so it is worth checking regularly.

Section 2: Region

Always keep an eye on the region in which you are operating. We will be using IAM, which is global, but for S3 we will need to define our region.

From the services tab, search for IAM, which stands for Identity and Access Management, and open it up.

Identity Access Management

What is IAM?

IAM, short for Identity and Access Management, is where you create the users that will have access to your services. These users can be given limited or full access depending on your requirements. To grant specific rights to a user, you create a policy. A policy, managed in IAM, defines whether specific service attributes are allowed or denied to the user. You can define inline policies that belong to a single user, or you can create custom policies that you apply to multiple users. This is a very neat system for granting multiple permissions to multiple users.
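For illustration, here is a minimal sketch of what such a policy document might look like; the bucket name example-bucket is hypothetical, and we will write a real bucket policy later in this article:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}

Attached to a user, this would allow reading and writing objects in example-bucket and nothing else.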

Let’s create a user: click ‘Users’ in the navigation, and then click ‘Add user.’

IAM: Add user
IAM: Name user

We will name our user s3_access_user. Remember to check Programmatic access.

What is Programmatic Access?

Programmatic access allows a user to communicate with AWS services using an access key and a secret. This is the best-practice method for securing your application. Each time you allow programmatic access for a user, an access key ID and a secret access key are generated. The secret key can only be copied or read once; if you lose it, you will need to generate a new one. The only practical way to revoke a previously generated key is to delete it. This is a powerful way of letting your code communicate with AWS cloud services.
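As a minimal sketch (assuming the aws-sdk package for NodeJS, which we will install later), one way to keep these keys out of your source code is to export them as the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; the SDK reads them automatically when no credentials are passed explicitly:

//aws-sdk picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
//from the environment when no credentials are given here
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });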

Click on ‘Next’ and you will come across groups. We will not be needing any groups, but on this screen you can either create a group or add the user to a group that you created earlier. Groups are useful when you need to assign a custom policy, or more than one policy, to a set of users.

Setting permissions

Click ‘Next’ and move on to creating the user.

Access key id

Once you have completed the process, the access key ID and the secret access key will appear. You can click on ‘show’ to reveal the secret key; both keys are needed to authenticate with AWS services. We have successfully created our new user and given it programmatic access.

Next, go back to the Users dashboard, click on the user, and copy the ARN of that user.

What is ARN?

ARN stands for Amazon Resource Name. It is a naming convention adopted by Amazon so that resources created across its services can be addressed individually for specific operations. It lets you uniquely identify a single resource in Amazon and perform an operation on it.
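The general format, with two illustrative examples (the account id below is made up), looks like this; note that S3 ARNs omit the region and account id:

arn:partition:service:region:account-id:resource
arn:aws:iam::123456789012:user/s3_access_user
arn:aws:s3:::soshace-s3-tutorial/avatar.png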

S3 Bucket Creation

Now go to the services menu, type in S3, and click on the result to get to the Amazon S3 dashboard.

What is S3?

S3 stands for Amazon Simple Storage Service. AWS S3 is a file storage service provided by Amazon in its cloud architecture, and it takes full advantage of the cloud by being scalable. S3 uses object-based storage, in which data is stored as objects: each object consists of the data itself, metadata, and a unique identifier. S3 employs the concept of a bucket as opposed to a drive, so each time you create a bucket you are allotting a storage container with a unique identifier. The bucket, like a drive, can contain a whole hierarchy of folders. Let’s create a bucket by clicking ‘Create Bucket:’

S3 Buckets

Type in a bucket name, but remember that the bucket name has to be globally unique, as the S3 service uses the name as a DNS entry to make the bucket available on the cloud. Next, you can also select the region in which you want the bucket; for now, we will keep US East (N. Virginia), which is known as us-east-1.
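The name matters because each object in the bucket becomes reachable through a URL derived from the bucket name, for example:

https://soshace-s3-tutorial.s3.amazonaws.com/<object-key>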

Creating bucket

Click on ‘next’, keep everything at its default, and finish creating the bucket.

Creating bucket 2

The bucket will immediately be available on your dashboard. Click on soshace-s3-tutorial and let’s configure some settings.

soshace-s3-tutorial
Bucket permissions

Once you’re in the Bucket, go into the permissions tab, click on ‘edit,’ and uncheck ‘Block all public access.’ Then click on ‘save.’

Through this, we are allowing the public to access this bucket from the internet. This public access works only through name resolution, i.e. a client must know the full URL of an object. Next, still in the permissions tab, click on ‘Bucket Policy’ and paste the following policy that we have created.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::soshace-s3-tutorial/*",
                "arn:aws:s3:::soshace-s3-tutorial"
            ]
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::981984658744:user/s3_access_user"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::bloodbillions-public/*"
        }        
    ]
}

This is what a policy definition looks like; it is written in JSON. The version is the date on which this revision of the policy language came into effect. Since Amazon is always updating and upgrading its services, it is helpful to keep track of which version to use.

The first statement attached to this policy starts with the ‘Effect’ tag, which allows an operation to be conducted on this bucket. The ‘Principal’ tag attaches a user to this specific policy; here the wildcard ‘*’ means everyone. The Amazon ecosystem can work two ways: you can attach a policy through IAM (Identity and Access Management), or, for a certain set of services like Amazon S3, you can attach a policy through the service itself. All policies refer to resources through ARNs (Amazon Resource Names), which are unique and generated for everything you create. The ‘Action’ is the key operation: in this case, we only want a client to be able to read a file if it has the complete URL. This also lets us restrict directory listing of the bucket, which is considered best security practice. The ‘Resource’ tag is where we specify the unique identifier for the service, i.e. its ARN, which in this case is ‘arn:aws:s3:::soshace-s3-tutorial/*’. The star (*) at the end is a wildcard that grants public read access to all files under that path.

The second statement is almost self-explanatory: it only allows our IAM user to upload, i.e. put an object (a file), into this bucket. The user is identified by the ARN that we saved earlier. Next, click on ‘save,’ and the orange public tag will appear.
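Once an object exists in the bucket (we will upload one shortly), you can sanity-check the policy from the command line, assuming curl is available:

curl -I https://soshace-s3-tutorial.s3.amazonaws.com/download.jpg

A 200 response confirms public read access through the complete URL, while requesting the bucket URL without an object key should be denied, confirming that directory listing is blocked.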

Successful configuration of AWS

We have successfully configured our Amazon cloud storage to be readable only through complete URLs, with directory listing disabled. We have also added secure programmatic access to the storage through an access key ID and secret key.

NodeJS Server

Now let’s start with the NodeJS server. Create a new directory named node_server and initialize a node project:

mkdir S3_NodeJS_Angular
cd S3_NodeJS_Angular
mkdir node_server
cd node_server
npm init -y
npm i -S express express-jwt multer aws-sdk jsonwebtoken body-parser
touch app.js

Now let’s look at the code for app.js:

//Importing Libraries
const express       = require('express')
const expressJWT    = require('express-jwt');
const jwt           = require('jsonwebtoken');
const AWS           = require('aws-sdk');
const multer        = require('multer')

//Initialising Express and Port
const app           = express();
const serverPort    = 8081;
const http          = require('http').Server(app)

//Initialising Multer to Memory Storage
var storage         = multer.memoryStorage()
var upload          = multer({storage: storage});

//JWT Secret Key
var jwt_secret      = 'yoursecretkeyword';

//Initialize Amazon S3
//NOTE: in production, keep these credentials out of source control,
//e.g. read them from environment variables
const s3 = new AWS.S3({
    accessKeyId: 'your_aws_iam_access_key',
    secretAccessKey: 'your_aws_iam_secret',
    region : 'us-east-1',
    signatureVersion: "v4"
});

//Opening Unauthenticated Paths
//NOTE: express-jwt 6+ also requires an algorithms option,
//e.g. expressJWT({ secret: jwt_secret, algorithms: ['HS256'] })
app.use(expressJWT({ secret: jwt_secret })
    .unless(
        { path: [
           '/',
           /\/public*/
        ]}
    ));

//Root Path
app.get('/', (req, res) => {
    res.json("Welcome to S3 NodeJS Angular Tutorial");
});

//Return JWT Token
app.get('/public/token', (req, res) => {
    var sample_user = {
        "id":       '12345',
        "name":     'soshace_user'
    }          
    
    //Generate Token
    let token = jwt.sign(sample_user, jwt_secret, { expiresIn: '60m'});

    //Send Back Token
    res.json({
        'success': true,
        'token': token
    });
});

//Upload File
app.put('/public/upload', upload.single("image"), function (req, res) {
    var params = {
        Bucket: "soshace-s3-tutorial",
        Key: req.file.originalname,
        Body: req.file.buffer,
        ContentType: req.file.mimetype
    }    

    s3.upload(params).send((err, data) => {
        if (err) {
            //return so we do not send a second response below
            return res.json({
                'success': false,
                "error": err
            });
        }
        res.json({
            'success': true,
            'img_url': data
        });
    });
});

//Starting Server Listen
http.listen(serverPort, function() {
    console.log("Listening to " + serverPort);
});

Let’s run the application and test an image upload through Postman. Run it as follows:

node app.js
Running an app

Next, open Postman so we can test our API endpoint. You can easily set the fields by looking at the image below. For Postman to be able to upload an image, the request body has to be form-data; when you specify the file key, a dropdown appears that lets you switch its type from text to file. A browse button will then appear in the value field, allowing you to select a file.

Postman

Once you click ‘send,’ Postman passes the file to the NodeJS script. The script authenticates against the AWS S3 bucket using the access key and secret of our IAM user, acts as a proxy by uploading the file to AWS S3, and forwards the response back to the caller of the API endpoint.
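If you prefer the command line, the same request can be sketched with curl, where photo.jpg is a placeholder for any local image and the field name ‘image’ must match what the server expects:

curl -X PUT -F "image=@photo.jpg" http://localhost:8081/public/upload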

 

File upload

Great news: we have successfully uploaded the file through the API endpoint to the AWS S3 bucket.

Everything else is plainly explained by the comments; let’s examine the PUT request at ‘/public/upload’:

//Upload File
app.put('/public/upload', upload.single("image"), function (req, res) {
    var params = {
        Bucket: "soshace-s3-tutorial",
        Key: req.file.originalname,
        Body: req.file.buffer,
        ContentType: req.file.mimetype
    }    

    s3.upload(params).send((err, data) => {
        if (err) {
            //return so we do not send a second response below
            return res.json({
                'success': false,
                "error": err
            });
        }
        res.json({
            'success': true,
            'img_url': data
        });
    });
});

The s3.upload call requires a params object containing the bucket name where the file will be uploaded and the key, i.e. the file name; in this case we are using the original filename. Remember, all file names have to be unique, and files with the same name will be overwritten; to keep this sorted out, you should append the current epoch timestamp to the name. The body requires the complete file buffer, and the content type requires the MIME type of the file. All of these can be retrieved from the request object.
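As a minimal sketch, the key could be made unique by prefixing the original filename with the current epoch timestamp via Date.now():

//Prefix the original name with the epoch timestamp so that two
//uploads named "download.jpg" never overwrite each other
var params = {
    Bucket: "soshace-s3-tutorial",
    Key: Date.now() + '_' + req.file.originalname,
    Body: req.file.buffer,
    ContentType: req.file.mimetype
}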

Angular 2+ Part

HTML Side

<input type="file" (change)="selectFile($event)">
<button class="btn btn-warning btn-sm" (click)="upload()">Upload Photo</button>

TypeScript Side

selectFile(event) {
  this.selectedFiles = event.target.files;
}

upload() {
    //upload_url should point to the NodeJS endpoint,
    //e.g. 'http://localhost:8081/public/upload'
    this.currentFileUpload = this.selectedFiles.item(0);
    const formData: FormData = new FormData();
    //the field name 'image' must match upload.single("image") on the server
    formData.append('image', this.currentFileUpload, this.currentFileUpload.name);
    this.http.put(upload_url, formData)
    .subscribe(
      res => {
        if (res['success']) {
          console.log(res['img_url']);
          /*
           "img_url": {
            "ETag": "\"b304ee929b06d3275f5199662ae098dd\"",
            "Location": "https://soshace-s3-tutorial.s3.amazonaws.com/download.jpg",
            "key": "download.jpg",
            "Key": "download.jpg",
            "Bucket": "soshace-s3-tutorial"
           }
          */
        }
      },
      err => {
        console.log(err);
      }
    )   
}
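The ‘/public/upload’ route is deliberately left unauthenticated in this demo, but here is a hedged sketch of how the Angular side could use the JWT flow: fetch a token from ‘/public/token’, then attach it when calling a protected route. Here token_url and private_url are assumed constants, and HttpHeaders is imported from @angular/common/http:

//Fetch a token from the open endpoint, then present it as a
//Bearer token on any route guarded by express-jwt
this.http.get(token_url).subscribe(res => {
  if (res['success']) {
    const headers = new HttpHeaders({
      'Authorization': 'Bearer ' + res['token']
    });
    this.http.get(private_url, { headers }).subscribe(
      data => console.log(data),
      err => console.log(err)
    );
  }
});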

Github: https://github.com/th3n00bc0d3r/Amazon-S3-Bucket-NodeJS-Angular

References:
https://aws.amazon.com/blogs/security/guidelines-for-protecting-your-aws-account-while-using-programmatic-access/
https://en.wikipedia.org/wiki/Amazon_S3
https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html
https://aws.amazon.com/iam/
https://searchaws.techtarget.com/definition/Amazon-Resource-Name-ARN
