Story Upload Flow
The following describes how our client app uploads stories to AWS, how AWS processes them, and how our server responds to events during this flow.
You should read everything that follows, but from the perspective of the client app alone, here is the simplest description of the story upload flow:
- The client app sends an authenticated request to the StoryJob#create endpoint, which responds with a referenceId and temporary security credentials so that the client app can upload to our S3 instance without storing AWS credentials itself.
- The client app uploads the story it has created for a set of live photos to S3 at:
  Bucket: appName-input
  Filename: <referenceId>.<extension>
- AWS and our API handle the rest, as described below in detail.
Following the selection or creation of Story media (a video file) on the client device, the client app will initiate sending the Story media to the server.
To do so, the client app should first make a request to our StoryJob#create endpoint. Doing so will create a StoryJob model in our database for the user referenced by the JWT token passed with the request. Please note this detail: any time you hit the StoryJob#create endpoint the server will create a new StoryJob for the user, so don't spam it.
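As a rough illustration of what this first request might look like from a TypeScript client (the host, route, and exact response field names here are assumptions made for the sketch; the endpoint documentation is authoritative):

```typescript
// Sketch only: route and field names are assumptions, not the documented API.
interface StoryJobCreateResponse {
  referenceId: string;
  aws: {
    credentials: {
      accessKeyId: string;
      secretAccessKey: string;
      sessionToken: string;
    };
  };
}

// Ask the server for a new StoryJob. The server identifies the user from the
// JWT and creates a fresh StoryJob on every call, so call this once per upload.
async function createStoryJob(jwt: string): Promise<StoryJobCreateResponse> {
  const res = await fetch("https://api.example.com/api/v1/story_jobs", {
    method: "POST",
    headers: { Authorization: `Bearer ${jwt}` },
  });
  if (!res.ok) {
    throw new Error(`StoryJob#create failed with status ${res.status}`);
  }
  return res.json();
}
```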
After creating a new StoryJob for the user, the server will respond by passing down a referenceId in addition to temporary AWS security credentials that will allow files to be added to the environment's input bucket. The referenceId will be used for the following:
- Naming the file sent to S3.
- Tracking the file sent to S3.
- Looking up the StoryJob, and therefore the User for the StoryJob, when AWS notifies the server that the uploaded file has finished transcoding.
- Creating references to transcoded files for the Story created from the StoryJob once transcoding completes.
(Only the first of these details, naming the file sent to S3, matters to the client app at this time.)
After retrieving a referenceId and AWS security credentials for a new StoryJob for the Story media on the client device, the client app should upload this file to Amazon S3 at:
Bucket: appName-input
Filename: <referenceId>.<extension>
And include the accessKeyId, secretAccessKey, and sessionToken that were returned under aws.credentials in the payload of the original StoryJob#create response. See the endpoint documentation for details.
Additionally, when posting to S3 the app should also send metadata required to properly transcode and move the source file. This information is as follows:
Metadata : {
type : [file extension],
width : [video width],
height : [video height],
duration : [video duration in seconds]
}
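Putting the upload and metadata together, here is a minimal sketch using the AWS SDK for JavaScript v3 from a TypeScript client. The region is a placeholder and the parameter shapes simply mirror what this page describes; only the bucket, key pattern, and metadata fields come from the flow above.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// What the client needs from the StoryJob#create response (field names assumed).
interface UploadParams {
  referenceId: string;
  credentials: {
    accessKeyId: string;
    secretAccessKey: string;
    sessionToken: string;
  };
}

interface StoryMetadata {
  type: string;     // file extension, e.g. "mov"
  width: number;    // video width in pixels
  height: number;   // video height in pixels
  duration: number; // video duration in seconds
}

// Upload the Story file to the input bucket, named after the referenceId,
// with the transcoding metadata attached as S3 object metadata.
async function uploadStory(
  params: UploadParams,
  file: Blob,
  meta: StoryMetadata
): Promise<void> {
  const s3 = new S3Client({
    region: "us-east-1", // placeholder; use the environment's actual region
    credentials: params.credentials,
  });

  await s3.send(
    new PutObjectCommand({
      Bucket: "appName-input",
      Key: `${params.referenceId}.${meta.type}`,
      Body: file,
      // S3 user-defined metadata values must be strings; S3 stores them
      // under an x-amz-meta- prefix on the object.
      Metadata: {
        type: meta.type,
        width: String(meta.width),
        height: String(meta.height),
        duration: String(meta.duration),
      },
    })
  );
}
```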
After the file is uploaded to S3, our AWS setup will automatically use the Elastic Transcoder Pipeline (ETP) to transcode the uploaded file. This will create two file formats for use on the web (.mp4 and .webm) and output them into another S3 bucket named appName-output.
Bucket: appName-output
Filenames:
<referenceId>/mp4-<referenceId>.mp4
<referenceId>/webm-<referenceId>.webm
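For reference, the transcoded object keys are fully determined by the referenceId, so a client or server can reconstruct them with something as simple as this sketch:

```typescript
// Build the expected output keys in appName-output for a given referenceId.
function transcodedOutputKeys(referenceId: string): { mp4: string; webm: string } {
  return {
    mp4: `${referenceId}/mp4-${referenceId}.mp4`,
    webm: `${referenceId}/webm-${referenceId}.webm`,
  };
}
```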
NOTE: For a full description of our AWS workflow, please see our AWS documentation for illustrations and descriptions.
Once these files are created, the AWS Elastic Transcoder Pipeline will trigger Amazon's Simple Notification Service (SNS) to notify our server at api/:version/story_jobs/transcode_complete_webhook, delivering a payload of information such as the original referenceId and metadata pertaining to the newly created files residing in appName-output.
Here is a sample payload:
{
"referenceId" : "5fa0a2d6-46cb-4cff-82ab-ef35a73c4fab",
"state" : "COMPLETED",
"outputs" : [ {
"key" : "mp4-5fa0a2d6-46cb-4cff-82ab-ef35a73c4fab.mp4",
"type" : "mp4",
"duration" : 4,
"width" : 540,
"height" : 720
}, {
"key" : "webm-5fa0a2d6-46cb-4cff-82ab-ef35a73c4fab.webm",
"type" : "webm",
"duration" : 4,
"width" : 540,
"height" : 720
}, {
"key" : "mov-5fa0a2d6-46cb-4cff-82ab-ef35a73c4fab.mov",
"type" : "mov",
"duration" : 4,
"width" : 1080, // Note: Original file keeps size.
"height" : 1440 // Note: Original file keeps size.
} ]
}
When the transcode completion webhook endpoint receives a callback from AWS SNS, it uses the payload information to look up the associated StoryJob (via the referenceId inferred from the outputKeyPrefix key within the callback payload), deactivate it, and conditionally create a new Story if the payload's state has a value of COMPLETE. Note that for the time being we do not otherwise handle failure here; we simply deactivate the associated StoryJob and take no further action.
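The handler's behavior can be summarized with the following sketch. It is written in TypeScript purely for illustration; the store interface and record fields are hypothetical stand-ins inferred from this page, and the success-state string mirrors the sample payload above.

```typescript
// Hypothetical shapes inferred from this page, not the real schema.
interface TranscodeOutput {
  key: string;
  type: string;
  duration: number;
  width: number;
  height: number;
}

interface TranscodePayload {
  referenceId: string;
  state: string;
  outputs: TranscodeOutput[];
}

interface StoryJobRecord {
  referenceId: string;
  active: boolean;
  responseState?: string;
}

// Hypothetical persistence interface, just to make the handler's decisions explicit.
interface StoryJobStore {
  findByReferenceId(referenceId: string): Promise<StoryJobRecord | null>;
  save(job: StoryJobRecord): Promise<void>;
  createStory(job: StoryJobRecord, outputs: TranscodeOutput[]): Promise<void>;
}

async function handleTranscodeComplete(
  store: StoryJobStore,
  payload: TranscodePayload
): Promise<void> {
  const job = await store.findByReferenceId(payload.referenceId);
  if (!job) return;

  // The StoryJob is always deactivated, whether or not the transcode succeeded.
  job.active = false;
  job.responseState = payload.state;
  await store.save(job);

  // Only a successful transcode produces a Story; "COMPLETED" matches the
  // state value shown in the sample payload above. Other states are ignored
  // for now (no further failure handling).
  if (payload.state !== "COMPLETED") return;

  await store.createStory(job, payload.outputs);
}
```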
Following the successful creation of a new Story for the user associated with the StoryJob, we decide whether to set the new Story as the primaryStory for the user based on the following criteria (sketched in code after the note below):
- If the associated StoryJob was the last StoryJob created for the user, set the Story to primaryStory.
- If the associated StoryJob was not the last StoryJob created for the user, but there are no StoryJob objects created after this specific StoryJob that yet have their responseState attribute set to COMPLETE, set the Story to primaryStory (until the later-created StoryJobs complete).
- If the associated StoryJob was not the last StoryJob created for the user, and there exists a StoryJob created later than this StoryJob that has a responseState of COMPLETE, save the new Story but keep its primaryStory attribute set to false.
Note: Whenever a Story becomes the primaryStory for a user, all other Story objects for that user have their primaryStory attribute set to false automatically.
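The primaryStory decision itself can be expressed roughly as follows. This is an illustrative TypeScript sketch; the responseState value and the rules come from the criteria above, while the field names and comparison by creation time are assumptions.

```typescript
// Hypothetical summary of a StoryJob, for illustration only.
interface StoryJobSummary {
  id: string;
  createdAt: Date;
  responseState?: string; // set to "COMPLETE" once AWS has finished its work
}

// Decide whether the Story produced by `job` should become the user's
// primaryStory, given all StoryJobs that exist for that user.
function shouldBePrimaryStory(
  job: StoryJobSummary,
  allJobsForUser: StoryJobSummary[]
): boolean {
  const laterJobs = allJobsForUser.filter(
    (j) => j.createdAt.getTime() > job.createdAt.getTime()
  );

  // Rule 1: the last StoryJob created for the user always wins.
  if (laterJobs.length === 0) return true;

  // Rule 2: not the last job, but no later-created job has completed yet,
  // so this Story is primary until one of them does.
  // Rule 3: a later-created job has already completed, so primaryStory stays false.
  return !laterJobs.some((j) => j.responseState === "COMPLETE");
}
```

Whenever this returns true, the server would also flip primaryStory to false on the user's other Story objects, as the note above describes.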
This is probably a little confusing, so let's see it illustrated:
- SCENARIO A:
  StoryJob A is created and the AWS callback hits our server. Story A is set to primaryStory. Later, StoryJob B is created, the AWS callback hits our server, and Story B is set to primaryStory.
- SCENARIO B:
  StoryJob A is created, immediately followed by StoryJob B being created, before AWS has finished transcoding the media for StoryJob A. AWS completes its work for StoryJob A and the AWS callback hits our server. Although StoryJob B was created after StoryJob A, StoryJob B's work has not yet been completed by AWS, so StoryJob A's Story is set to primaryStory. Soon after, AWS completes its work for StoryJob B, the AWS callback hits our server, and StoryJob B's Story becomes the primaryStory.
- SCENARIO C:
  StoryJob A is created, immediately followed by StoryJob B being created, before AWS has finished transcoding the media for StoryJob A. There is an unknown delay in the completion of the work for StoryJob A by AWS, and for whatever reason StoryJob B finishes first, and its AWS callback hits our server. StoryJob B, being the last created StoryJob for its user, creates a new Story set to be the primaryStory. When StoryJob A's work is later completed by AWS, the callback hits our server, creating a Story for the user, but not setting it to be the primaryStory.
The underlying logic for all of the above is that the last story a user creates should always become their primary story, but only after AWS has successfully completed its work for each story.