For this assignment you will submit some architecture design documents for two more systems. You may work alone or in pairs (groups of three are not allowed). Please see HW5 instructions for the requirements.
If you worked alone on HW5 and you work alone on HW6, then you may submit just one of these designs (it will be worth twice the credit). If you worked in a pair on HW5 but decided to do HW6 alone, then you must do 75% of HW6. Specifically, you should skip the API and schema documentation for one of the two architectures.
This assignment draws on all Lectures and all the textbook readings. At the very least, please catch up to Lecture 16 before completing this assignment.
Architecture 3: Ride-Hailing App
Design an architecture to implement something like Uber or Lyft. Please remember:
- There are two different smartphone apps: customer and driver.
- There is a route planning system that gives intelligent directions by learning from past rides.
Note: Storing all past ride location data allows the system to compute optimal routes, and it also enables other "big data" analyses that are important to the business (e.g., in which cities should we buy more advertisements to recruit more drivers?).
You do not have to submit any wireframes (UI drawings) for this architecture because we'll assume it's similar to existing ride-share apps.
The smartphone app can call a function provided by the OS to ask the GPS hardware for the current latitude/longitude location. The GPS hardware then listens for signals from GPS satellites to determine the current location. In other words, the device knows its location without making any network requests, and without interacting with any of your backend code, so you don't need to show GPS in your diagram.
There should be no single machine bottleneck. In particular, driver locations must be updated frequently, so there must be more than one machine handling these "location writes."
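One common way to avoid that single-machine bottleneck is to partition the location writes across several machines. Here is a minimal sketch, assuming a simple hash-partitioning scheme; the shard count and function names are illustrative, not something the assignment prescribes:

```python
# Hypothetical sketch: partitioning driver location writes across N machines.
# NUM_SHARDS and the function name are illustrative assumptions.
import hashlib

NUM_SHARDS = 8  # number of machines handling "location writes"

def shard_for_driver(driver_id: str, num_shards: int = NUM_SHARDS) -> int:
    """A stable hash of the driver id picks which shard receives the write."""
    digest = hashlib.sha256(driver_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

# All updates for one driver land on the same shard, so no single machine
# must absorb every driver's frequent location updates.
assert shard_for_driver("driver-42") == shard_for_driver("driver-42")
```

Because the hash is deterministic, reads for a given driver can be routed to the same shard that receives that driver's writes.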
You must be able to quickly find drivers near a particular customer (using a DB key or index rather than scanning all drivers). There are at least two different good approaches for this.
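One of those approaches can be sketched as a coarse grid-cell index: nearby drivers share a cell key, so "drivers near me" becomes an indexed lookup over a handful of cells rather than a scan of all drivers. The cell size and helper names below are illustrative assumptions, and a real system would use a database index (or a geohash) rather than an in-memory dict:

```python
# Illustrative sketch of one approach: bucket drivers into coarse grid
# cells keyed by quantized latitude/longitude. The cell size is assumed.
CELL = 0.01  # ~1 km in latitude; an illustrative choice

def cell_key(lat, lon):
    """Quantize a coordinate into a grid-cell key."""
    return (int(lat // CELL), int(lon // CELL))

def neighbor_keys(lat, lon):
    """The cell containing the point plus its 8 surrounding cells."""
    r, c = cell_key(lat, lon)
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

index = {}  # cell key -> set of driver ids currently in that cell

def update_driver(driver_id, lat, lon):
    index.setdefault(cell_key(lat, lon), set()).add(driver_id)

def drivers_near(lat, lon):
    """Look up only the 9 relevant cells instead of scanning all drivers."""
    found = set()
    for k in neighbor_keys(lat, lon):
        found |= index.get(k, set())
    return found
```

The other common approach, not shown here, is a geohash-style string prefix, which lets an ordinary B-tree index answer the same query.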
Your route planning system will periodically run in the background to compute some kind of travel-time graph for each metro area, based on the location time-series data recorded from rides. The graph we are computing might have a few hundred thousand nodes representing important locations (such as certain road intersections) and edges between them recording the expected travel times. You may assume that the resulting travel graph is small enough to fit in one machine's RAM, but one machine cannot handle the work of calculating the shortest-path on this graph for all customers.
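As a reminder of the per-request work your fleet of route-planning machines must handle, here is a minimal shortest-path sketch using Dijkstra's algorithm over an in-memory travel-time graph. The tiny graph at the bottom is invented example data, not part of the assignment:

```python
# Minimal sketch: shortest expected travel time on an in-memory graph
# via Dijkstra's algorithm. The example graph is made-up data.
import heapq

def shortest_time(graph, src, dst):
    """graph: node -> list of (neighbor, minutes). Returns total minutes."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")  # dst unreachable

g = {"A": [("B", 4), ("C", 2)], "C": [("B", 1)], "B": []}
print(shortest_time(g, "A", "B"))  # prints 3.0 (A -> C -> B)
```

Since the graph fits in one machine's RAM but one machine cannot serve every customer, a natural design is to replicate the read-only graph across many stateless path-finding servers.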
Architecture 4: Video Conferencing Tool
Design an architecture to implement Zoom, Google Hangouts, or a similar app. Please limit the features of the app to the bare minimum, except I do want you to support:
You do not have to submit any wireframes (UI drawings) for this architecture because we'll assume it's similar to existing video conferencing tools. However, if you add additional features that are difficult to explain, then you may include some wireframes to help your explanation. Note that there is some good information about Zoom on the High Scalability blog.
- Your architecture here will look very different from the previous three because there is a ton of data flowing continuously between clients.
- Like Architecture 3, you should probably implement this with a native app (whether iPhone, Android, Mac, Windows, or Linux). This will give better client performance and eliminates the need to fetch UI from the backend.
- Data on the Internet is sent in "packets," never in a true stream. You can think of a live audio/video stream as a series of discrete data messages that are sent periodically, perhaps one media segment every 50 ms. The receiver reconstructs the continuous stream from these discrete segments.
- Calculating a virtual background is a computationally expensive operation. Think carefully about where the most scalable place to run this calculation is. Notice also that the user will want to see their virtual background applied to the image they see of themselves in the app.
- A large call may have hundreds of callers with video enabled, but each client only needs to download the video streams for the subset of at most 49 participants who are visible on their screen. Also, different viewers will be downloading video at different resolution/quality levels. If you're using the gallery view, then you'll be downloading lots of small/low-quality video feeds, but if you're using the "talker" view then you'll be downloading one high-resolution feed and maybe a few low-res feeds.
- You may assume that the client can simultaneously upload their video in multiple quality levels (see Zoom's Scalable Video Codec).
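The segment-based streaming idea above can be sketched as follows, assuming each media segment carries a sequence number (an illustrative assumption, not a mandated protocol). Packets may arrive out of order, and the receiver restores playback order:

```python
# Hedged sketch: reassembling an ordered media stream from discrete
# segments that arrived out of order. The (sequence_number, payload)
# format is an assumption for illustration.
def reassemble(segments):
    """segments: list of (sequence_number, payload) in arrival order.
    Returns payloads sorted into playback order."""
    return [payload for _, payload in sorted(segments)]

arrived = [(2, "frame2"), (0, "frame0"), (1, "frame1")]
print(reassemble(arrived))  # prints ['frame0', 'frame1', 'frame2']
```

A real client would do this in a small "jitter buffer" and drop segments that arrive after their playback deadline, but the core idea is the same.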
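To connect the last two points, here is a hedged sketch of how a forwarding server might choose which uploaded quality tier to send each viewer, based on tile size and available bandwidth. The tier names and bitrates are made-up assumptions, not Zoom's actual values:

```python
# Illustrative sketch of simulcast selection: the client uploads the same
# video at several quality tiers, and the server forwards to each viewer
# only the tier that viewer needs. Tier names/bitrates are assumptions.
TIERS = {"low": 150, "medium": 600, "high": 2500}  # kbps, illustrative

def pick_tier(tile_is_large: bool, downlink_kbps: int) -> str:
    """Small gallery tiles always get the low tier; a large 'talker' tile
    gets the best tier the viewer's downlink can sustain."""
    if not tile_is_large:
        return "low"
    for name in ("high", "medium", "low"):
        if TIERS[name] <= downlink_kbps:
            return name
    return "low"  # fall back when even the low tier exceeds the downlink
```

Note the design consequence: the server never transcodes video itself; it only selects among streams the sender already uploaded.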
Turn in one big PDF for the team. Only one teammate should submit. The other should just submit the name of the teammate. The submitted document should clearly list the authors (including netids).