
VoiceSearchPhotoAlbum

Implemented a photo album web application that can be searched using natural language through both text and voice. Created an intelligent search layer to query your photos for people, objects, actions, landmarks, and more.

AWS Services Used

S3, Lex, Elasticsearch, Rekognition, Lambda, CodePipeline, CloudFormation, and API Gateway

Architecture

(Architecture diagram)

Implementation

  1. Launch an Elasticsearch instance using the AWS Elasticsearch Service and create a new domain called “photos” (domain-creation sketch below).

  2. Upload & index photos (indexing Lambda sketch below)

  3. Search
    • Create a Lambda function (LF2) called “search-photos” (search Lambda sketch below).
    • Create an Amazon Lex bot to handle search queries.
    • Create one intent named “SearchIntent”.
    • Add training utterances to the intent so that the bot can pick up both keyword searches (“trees”, “birds”) and sentence searches (“show me trees”, “show me photos with trees and birds in them”).
  4. Frontend
    • Build a simple frontend application that allows users to:
      • Make search requests to the GET /search endpoint
      • Display the photos returned by the query
      • Upload new photos using the PUT /photos endpoint (example requests below)
    • Create an S3 bucket for your frontend (B1).
    • Set up the bucket for static website hosting (same as HW1).
    • Upload the frontend files to the bucket (B2).
    • Integrate the API Gateway-generated SDK (SDK1) into the frontend to connect your API.
  5. Deploy your code using AWS CodePipeline
  6. Create an AWS CloudFormation template for the stack (stack-deployment sketch below)
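
Example Sketches

The domain from step 1 can be created in the console or programmatically. The snippet below is a minimal boto3 sketch; the instance type, volume size, and Elasticsearch version are placeholder choices, and the access policy (omitted here) still needs to be configured for your account.

```python
import boto3

es = boto3.client("es")

# Minimal dev-sized "photos" domain. Instance type, count, volume size, and
# version are placeholder choices -- adjust them (and attach an access policy)
# before running this against your own account.
response = es.create_elasticsearch_domain(
    DomainName="photos",
    ElasticsearchVersion="7.10",
    ElasticsearchClusterConfig={
        "InstanceType": "t3.small.elasticsearch",
        "InstanceCount": 1,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 10},
)

print(response["DomainStatus"]["ARN"])
```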
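
Step 2 is not spelled out above; one common arrangement, assumed in the sketch below, is a Lambda function triggered by S3 uploads that calls Rekognition for labels and writes a document into the “photos” index. The endpoint constant and field names are illustrative, the requests library has to be packaged with the function, and request signing against the domain is omitted for brevity.

```python
import json
import urllib.parse

import boto3
import requests  # not in the default Lambda runtime; bundle it with the deployment package

rekognition = boto3.client("rekognition")

# Hypothetical values -- replace with your own domain endpoint.
ES_ENDPOINT = "https://<your-es-domain-endpoint>"
ES_INDEX = "photos"


def lambda_handler(event, context):
    """Triggered by S3 PUT events: label each new photo with Rekognition and index it."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Ask Rekognition what is in the photo (objects, actions, landmarks, ...).
        labels = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MaxLabels=10,
            MinConfidence=80,
        )["Labels"]

        # Field names below are illustrative, not a fixed schema.
        document = {
            "objectKey": key,
            "bucket": bucket,
            "createdTimestamp": record["eventTime"],
            "labels": [label["Name"].lower() for label in labels],
        }

        # Index the document; auth / request signing is omitted here for brevity.
        requests.post(
            f"{ES_ENDPOINT}/{ES_INDEX}/_doc",
            json=document,
            headers={"Content-Type": "application/json"},
        )

    return {"statusCode": 200, "body": json.dumps("indexed")}
```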
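
For step 3, the sketch below shows one way “search-photos” (LF2) could be wired up: forward the free-text query to the Lex bot, collect whatever slot values SearchIntent resolves, and run a match query against the “photos” index. The bot name, alias, field names, and query-string parameter are hypothetical, a Lex V1 bot is assumed (hence the lex-runtime client), and Elasticsearch authentication is again omitted.

```python
import json

import boto3
import requests  # bundled with the deployment package

lex = boto3.client("lex-runtime")  # assumes a Lex V1 bot

# Hypothetical values -- substitute your own bot, alias, and domain endpoint.
ES_ENDPOINT = "https://<your-es-domain-endpoint>"
ES_INDEX = "photos"
BOT_NAME = "SearchPhotosBot"
BOT_ALIAS = "prod"


def lambda_handler(event, context):
    """GET /search?q=... -> let Lex extract keywords, then query Elasticsearch."""
    query = event["queryStringParameters"]["q"]  # "q" is an assumed parameter name

    # Let SearchIntent pull keywords out of the free-text or transcribed voice query.
    lex_response = lex.post_text(
        botName=BOT_NAME,
        botAlias=BOT_ALIAS,
        userId="search-photos-lambda",
        inputText=query,
    )
    keywords = [v for v in (lex_response.get("slots") or {}).values() if v]

    # Match every extracted keyword against the indexed labels.
    results = []
    for keyword in keywords:
        es_response = requests.get(
            f"{ES_ENDPOINT}/{ES_INDEX}/_search",
            json={"query": {"match": {"labels": keyword.lower()}}},
            headers={"Content-Type": "application/json"},  # auth/signing omitted
        ).json()
        results.extend(hit["_source"] for hit in es_response["hits"]["hits"])

    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"results": results}),
    }
```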
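
The frontend in step 4 talks to the API through the generated SDK (SDK1), but the two endpoints can also be exercised directly, which is handy while wiring things up. The snippet below is a Python stand-in for those calls; the base URL is a placeholder, and it assumes PUT /photos takes the object key as a path parameter, which depends on how your API is defined.

```python
import requests

# Placeholder API Gateway stage URL -- replace with your deployed endpoint.
API_BASE = "https://<api-id>.execute-api.<region>.amazonaws.com/prod"


def search_photos(query: str):
    """Call GET /search with a natural-language query and return the matching photos."""
    response = requests.get(f"{API_BASE}/search", params={"q": query})
    response.raise_for_status()
    return response.json()["results"]


def upload_photo(path: str, object_name: str):
    """Call PUT /photos to push a new photo through the API."""
    with open(path, "rb") as photo:
        response = requests.put(
            f"{API_BASE}/photos/{object_name}",  # key-as-path-parameter is an assumption
            data=photo.read(),
            headers={"Content-Type": "image/jpeg"},
        )
    response.raise_for_status()


if __name__ == "__main__":
    upload_photo("beach.jpg", "beach.jpg")
    print(search_photos("show me photos with trees and birds in them"))
```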
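
Once the CloudFormation template from step 6 exists, the stack can be brought up from the console, the CLI, or boto3; the call below sketches the boto3 route with placeholder file and stack names.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder template and stack names -- point this at your actual template.
with open("photo-album-stack.yaml") as template:
    cloudformation.create_stack(
        StackName="voice-search-photo-album",
        TemplateBody=template.read(),
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM roles
    )
```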