
AWS Lambda Function Script Code: Adding Items to DynamoDB Table

The Lambda function below can be used to add records to a DynamoDB table. It also shows how to read a CSV file from an S3 bucket:


import boto3
import csv

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')

def lambda_handler(event, context):
    # Read the bucket name and object key from the S3 event notification
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    print("Bucket=" + bucket)
    print("Key=" + key)

    # Only process CSV files
    if not key.endswith('.csv'):
        return 'Not a csv file'

    # Download the file from S3 to Lambda's /tmp storage
    local_path = '/tmp/' + key
    s3.download_file(bucket, key, local_path)

    table = dynamodb.Table('prices_table')
    with open(local_path, 'r') as infile:
        reader = csv.reader(infile)
        next(reader)  # skip the header row
        for row in reader:
            # Insert one item per CSV row into the DynamoDB table
            response = table.put_item(
                Item={
                    'productId': row[0],
                    'product_name': row[1],
                    'price': row[2],
                    'sale_price': row[3],
                    'Code': row[4]
                }
            )
            print(response)
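
The handler assumes the prices_table already exists in DynamoDB. If you need to create it first, the sketch below shows one way to do it with boto3. Note that the handler does not reveal the key schema, so treating productId as a String partition key (and using on-demand billing) is an assumption.

import boto3

dynamodb = boto3.resource('dynamodb')

# Sketch only: productId as a String partition key is an assumed key schema
table = dynamodb.create_table(
    TableName='prices_table',
    KeySchema=[{'AttributeName': 'productId', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'productId', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST'
)

# Wait until the table is active before loading data into it
table.wait_until_exists()
print(table.table_status)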
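Once the table exists, you can test the handler locally by passing it a minimal S3 event payload shaped like the notification Lambda receives when a file lands in the bucket. The CSV is expected to have a header row followed by columns in the order productId, product_name, price, sale_price, Code. This is only a sketch: the bucket name my-prices-bucket and file name prices.csv are placeholders, the handler is assumed to be saved as lambda_function.py, and running it for real needs AWS credentials plus the S3 object and prices_table in place.

# sample_invoke.py - minimal local test harness (placeholder names, assumes
# the handler above is saved as lambda_function.py)
from lambda_function import lambda_handler

# Bare-bones S3 put-notification payload; only the fields the handler reads are filled in
sample_event = {
    'Records': [
        {
            's3': {
                'bucket': {'name': 'my-prices-bucket'},  # placeholder bucket
                'object': {'key': 'prices.csv'}          # placeholder object key
            }
        }
    ]
}

if __name__ == '__main__':
    # context is not used by the handler, so None is enough for a local run
    print(lambda_handler(sample_event, None))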
