TAO

Facebook puts an extremely demanding workload on its data backend. Every time any one of over a billion active users visits Facebook through a desktop browser or on a mobile device, they are presented with hundreds of pieces of information from the social graph. Users see News Feed stories; comments, likes, and shares for those stories; photos and check-ins from their friends — the list goes on. The high degree of output customization, combined with a high update rate of a typical user’s News Feed, makes it impossible to generate the views presented to users ahead of time. Thus, the data set must be retrieved and rendered on the fly in a few hundred milliseconds.

This challenge is made more difficult because the data set is not easily partitionable, and by the tendency of some items, such as photos of celebrities, to have request rates that can spike significantly. Multiply this by the millions of times per second this kind of highly customized data set must be delivered to users, and you have a constantly changing, read-dominated workload that is incredibly challenging to serve efficiently.

Memcache and MySQL

Facebook has always realized that even the best relational database technology available is a poor match for this challenge unless it is supplemented by a large distributed cache that offloads the persistent store. Memcache has played that role since Mark Zuckerberg installed it on Facebook’s Apache web servers back in 2005. As efficient as MySQL is at managing data on disk, the assumptions built into the InnoDB buffer pool algorithms don’t match the request pattern of serving the social graph. The spatial locality on ordered data sets that a block cache attempts to exploit is not common in Facebook workloads. Instead, what we call creation time locality dominates the workload — a data item is likely to be accessed if it has been recently created. Another source of mismatch between our workload and the design assumptions of a block cache is the fact that a relatively large percentage of requests are for relations that do not exist — e.g., “Does this user like that story?” is false for most of the stories in a user’s News Feed. Given the overall lack of spatial locality, pulling several kilobytes of data into a block cache to answer such queries just pollutes the cache and contributes to the lower overall hit rate in the block cache of a persistent store.

The use of memcache vastly improved the memory efficiency of caching the social graph and allowed us to scale in a cost-effective way. However, the code that product engineers had to write for storing and retrieving their data became quite complex. Even though memcache has “cache” in its name, it’s really a general-purpose networked in-memory data store with a key-value data model. It will not automatically fill itself on a cache miss or maintain cache consistency. Product engineers had to work with two data stores and very different data models: a large cluster of MySQL servers for storing data persistently in relational tables, and an equally large collection of memcache servers for storing and serving flat key-value pairs derived (some indirectly) from the results of SQL queries. Even with most of the common chores encapsulated in a data access library, using the memcache-MySQL combination efficiently as a data store required quite a bit of knowledge of system internals on the part of product engineers. Inevitably, some made mistakes that led to bugs, user-visible inconsistencies, and site performance issues. In addition, changing table schemas as products evolved required coordination between engineers and MySQL cluster operators. This slowed down the change-debug-release cycle and didn’t fit well with Facebook’s “move fast” development philosophy.

Objects and associations

In 2007, a few Facebook engineers set out to define new data storage abstractions that would fit the needs of all but the most demanding features of the site while hiding most of the complexity of the underlying distributed data store from product engineers. The objects and associations API that they created was based on the graph data model and was initially implemented in PHP, running on Facebook’s web servers. It represented data items as nodes (objects), and relationships between them as edges (associations). The API was an immediate success, with several high-profile features, such as likes, pages, and events, implemented entirely on objects and associations, with no direct memcache or MySQL calls.

As adoption of the new API grew, several limitations of the client-side implementation became apparent. First, small incremental updates to a list of edges required invalidation of the entire cached item that stored the list, reducing hit rate. Second, requests operating on a list of edges had to always transfer the entire list from memcache servers over to the web servers, even if the final result contained only a few edges or was empty. This wasted network bandwidth and CPU cycles. Third, cache consistency was difficult to maintain. Finally, avoiding thundering herds in a purely client-side implementation required a form of distributed coordination that was not available for memcache-backed data at the time.

All those problems could be solved directly by writing a custom distributed service designed around objects and associations. In early 2009, a team of Facebook infrastructure engineers started to work on TAO (“The Associations and Objects”). TAO has now been in production for several years. It runs on a large collection of geographically distributed server clusters. TAO serves thousands of data types and handles over a billion read requests and millions of write requests every second. Before we take a look at its design, let’s quickly go over the graph data model and the API that TAO implements.

TAO data model and API

[Figure: example subgraph of objects and associations created when Alice checks in at the Golden Gate Bridge]

This simple example shows a subgraph of objects and associations that is created in TAO after Alice checks in at the Golden Gate Bridge and tags Bob there, while Cathy comments on the check-in and David likes it. Every data item, such as a user, check-in, or comment, is represented by a typed object containing a dictionary of named fields. Relationships between objects, such as “liked by” or “friend of,” are represented by typed edges (associations) grouped in association lists by their origin. Multiple associations may connect the same pair of objects as long as the types of all those associations are distinct. Together, objects and associations form a labeled directed multigraph.
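For readers who find code easier to follow, here is a minimal in-memory sketch of that subgraph. The type names and field names below are illustrative, not TAO’s actual schema; the point is only that objects are typed dictionaries of fields and associations are typed, directed edges grouped by their origin.

```python
from collections import defaultdict

# Objects: typed nodes identified by a unique id, each carrying a dict of named fields.
objects = {
    1: {"type": "user",     "fields": {"name": "Alice"}},
    2: {"type": "user",     "fields": {"name": "Bob"}},
    3: {"type": "user",     "fields": {"name": "Cathy"}},
    4: {"type": "user",     "fields": {"name": "David"}},
    5: {"type": "location", "fields": {"name": "Golden Gate Bridge"}},
    6: {"type": "checkin",  "fields": {}},
    7: {"type": "comment",  "fields": {"text": "Wish we were there!"}},
}

# Associations: typed, directed edges grouped into lists by their origin (id1, assoc_type).
# Different association types may connect the same pair of objects.
assocs = defaultdict(list)  # (id1, assoc_type) -> list of id2

def add_assoc(id1, assoc_type, id2):
    assocs[(id1, assoc_type)].append(id2)

add_assoc(6, "checkin_at", 5)    # the check-in happened at the bridge
add_assoc(6, "authored_by", 1)   # Alice created the check-in
add_assoc(6, "tagged", 2)        # Bob is tagged in the check-in
add_assoc(7, "comment_on", 6)    # Cathy's comment is attached to the check-in
add_assoc(6, "liked_by", 4)      # David likes the check-in

print(assocs[(6, "liked_by")])   # -> [4]
```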

For every association type a so-called inverse type can be specified. Whenever an edge of the direct type is created or deleted between objects with unique IDs id1 and id2, TAO will automatically create or delete an edge of the corresponding inverse type in the opposite direction (id2 to id1). The intent is to help the application programmer maintain referential integrity for relationships that are naturally mutual, like friendship, or where support for graph traversal in both directions is performance critical, as for example in “likes” and “liked by.”

The set of operations on objects is of the fairly common create / set-fields / get / delete variety. All objects of a given type have the same set of fields. New fields can be registered for an object type at any time and existing fields can be marked deprecated by editing that type’s schema. In most cases product engineers can change the schemas of their types without any operational work.
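A toy sketch of those object operations and per-type schemas is below. The function names and the in-memory store are hypothetical, intended only to mirror the behavior just described: create / set-fields / get / delete on typed objects, plus registering or deprecating fields without any table migration.

```python
import itertools

_next_id = itertools.count(100)
_objects = {}   # id -> {"type": ..., "fields": {...}}
_schemas = {}   # object type -> {field name: {"deprecated": bool}}

def register_field(otype, field):
    _schemas.setdefault(otype, {})[field] = {"deprecated": False}

def deprecate_field(otype, field):
    _schemas[otype][field]["deprecated"] = True   # existing data stays readable

def obj_create(otype, **fields):
    oid = next(_next_id)
    _objects[oid] = {"type": otype, "fields": dict(fields)}
    return oid

def obj_update(oid, **fields):
    _objects[oid]["fields"].update(fields)

def obj_get(oid):
    return _objects.get(oid)

def obj_delete(oid):
    _objects.pop(oid, None)

register_field("checkin", "message")          # register a new field for an existing type
cid = obj_create("checkin", message="Crossing the bridge!")
obj_update(cid, message="Crossed the bridge!")
print(obj_get(cid))
deprecate_field("checkin", "message")         # later mark it deprecated in the schema
```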

Associations are created and deleted as individual edges. If the association type has an inverse type defined, an inverse edge is created automatically. The API helps the data store exploit the creation-time locality of the workload by requiring every association to have a special time attribute that is commonly used to represent the creation time of the association. TAO uses the association time value to optimize the working set in cache and to improve the hit rate.
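The sketch below illustrates both points: an association write carries a time attribute, and if an inverse type is configured, the opposite edge is maintained automatically. The call names loosely echo the TAO paper’s API (assoc_add / assoc_delete); the inverse-type table and storage layout are illustrative, not TAO’s implementation.

```python
import time
from collections import defaultdict

# (id1, assoc_type) -> {id2: {"time": ..., "data": ...}}; each edge carries its time
# attribute, which the real system uses to keep association lists ordered by recency.
_assocs = defaultdict(dict)

# Illustrative inverse-type table: writing one direction maintains the other.
INVERSE = {"likes": "liked_by", "liked_by": "likes",
           "friend": "friend"}    # friendship is its own inverse

def assoc_add(id1, atype, id2, t=None, data=None):
    t = t if t is not None else int(time.time())
    _assocs[(id1, atype)][id2] = {"time": t, "data": data}
    inv = INVERSE.get(atype)
    if inv is not None:
        # Auto-create the inverse edge (id2 -> id1) so mutual relationships stay consistent.
        _assocs[(id2, inv)][id1] = {"time": t, "data": data}

def assoc_delete(id1, atype, id2):
    _assocs[(id1, atype)].pop(id2, None)
    inv = INVERSE.get(atype)
    if inv is not None:
        _assocs[(id2, inv)].pop(id1, None)

assoc_add(4, "likes", 6)              # David likes the check-in...
print(6 in _assocs[(4, "likes")])     # True
print(4 in _assocs[(6, "liked_by")])  # ...and the inverse edge exists as well
```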

There are three main classes of read operations on associations:

- Point queries look up specific associations identified by their (id1, type, id2) triplets. Most often they are used to check whether two objects are connected by an association or not, or to fetch data for an association.
- Range queries find outgoing associations given an (id1, type) pair. Associations are ordered by time, so these queries are commonly used to answer questions like “What are the 50 most recent comments on this piece of content?” Cursor-based iteration is provided as well.
- Count queries give the total number of outgoing associations for an (id1, type) pair. TAO optionally keeps track of counts as association lists grow and shrink, and can report them in constant time.
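Here is a small sketch of the three read classes against a toy association list for a single (id1, type) pair. The call names echo the TAO paper (assoc_get / assoc_range / assoc_count); the list layout and data are purely illustrative.

```python
# Edges out of object 6 of type "comment_on", newest first (ordered by the time attribute).
edges = [  # (id2, time)
    (93, 1700000300),
    (92, 1700000200),
    (91, 1700000100),
]

def assoc_get(id2_set):
    """Point query: fetch the listed edges if they exist."""
    wanted = set(id2_set)
    return [e for e in edges if e[0] in wanted]

def assoc_range(offset, limit):
    """Range query: the `limit` most recent edges starting at `offset`."""
    return edges[offset:offset + limit]

def assoc_count():
    """Count query: TAO can maintain this as a counter, so it is constant time to read."""
    return len(edges)

print(assoc_get([92]))    # is comment 92 attached to this check-in?
print(assoc_range(0, 2))  # the 2 most recent comments
print(assoc_count())      # total number of comments
```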

We have kept the TAO API simple on purpose. For instance, it does not offer any operations for complex traversals or pattern matching on the graph. Executing such queries while responding to a user request is almost always a suboptimal design decision. TAO does not offer a server-side set intersection primitive. Instead we provide a client library function. The lack of clustering in the data set virtually guarantees that having the client orchestrate the intersection through a sequence of simple point and range queries on associations will require about the same amount of network bandwidth and processing power as doing such intersections entirely on the server side. The simplicity of the TAO API helps product engineers find an optimal division of labor between application servers, data store servers, and the network connecting them.
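To make the client-side intersection pattern concrete, here is a sketch of answering “which of my friends like this story?” using only range and point queries. The in-memory tables and helper functions are stand-ins, not TAO’s real client interface.

```python
FRIENDS = {1: [2, 3, 4, 5]}   # viewer 1's friends (an association list)
LIKED_BY = {42: {3, 5, 99}}   # users who like story 42 (another association list)

def assoc_range(table, id1, offset, limit):
    return table.get(id1, [])[offset:offset + limit]

def assoc_get(table, id1, id2_set):
    return [i for i in id2_set if i in table.get(id1, set())]

def friends_who_like(viewer_id, story_id, page_size=2):
    found, offset = [], 0
    while True:
        page = assoc_range(FRIENDS, viewer_id, offset, page_size)  # page through one list
        if not page:
            break
        # Point queries check which friends on this page also like the story.
        found.extend(assoc_get(LIKED_BY, story_id, page))
        offset += page_size
    return found

print(friends_who_like(1, 42))   # -> [3, 5]
```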

Implementation

The TAO service runs across a collection of server clusters geographically distributed and organized logically as a tree. Separate clusters are used for storing objects and associations persistently, and for caching them in RAM and flash memory. This separation allows us to scale different types of clusters independently and to make efficient use of the server hardware.

Client requests are always sent to caching clusters running TAO servers. In addition to satisfying most read requests from a write-through cache, TAO servers orchestrate the execution of writes and maintain cache consistency among all TAO clusters. We continue to use MySQL to manage persistent storage for TAO objects and associations.

The data set managed by TAO is partitioned into hundreds of thousands of shards. All objects and associations in the same shard are stored persistently in the same MySQL database, and are cached on the same set of servers in each caching cluster. Individual objects and associations can optionally be assigned to specific shards at creation time. Controlling the degree of data collocation proved to be an important optimization technique for reducing communication overhead and avoiding hot spots.
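A minimal sketch of that idea follows, assuming (as the TAO paper describes) that an object’s shard can be derived from its id and that a new object can optionally be placed on the shard of an existing object. The id encoding and shard count here are illustrative only.

```python
import itertools

NUM_SHARDS = 1000           # the real system uses hundreds of thousands of shards
_seq = itertools.count(1)

def shard_of(object_id):
    return object_id // 10**12   # illustrative encoding: shard in the high digits

def new_object_id(colocate_with=None):
    """Mint an id that embeds its shard, optionally on the shard of another object."""
    if colocate_with is not None:
        shard = shard_of(colocate_with)      # collocate related data to cut cross-shard traffic
    else:
        shard = next(_seq) % NUM_SHARDS      # otherwise spread objects across shards
    return shard * 10**12 + next(_seq)

checkin_id = new_object_id()
comment_id = new_object_id(colocate_with=checkin_id)   # keep the comment near its check-in
assert shard_of(comment_id) == shard_of(checkin_id)
```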

Shards can be migrated or cloned among servers in the same cluster to equalize the load and to smooth out load spikes. Load spikes are common and happen when a handful of objects or associations become extremely popular as they appear in the News Feeds of tens of millions of users at the same time.

There are two tiers of caching clusters in each geographical region. Clients talk to the first tier, called followers. If a cache miss occurs on the follower, the follower attempts to fill its cache from a second tier, called a leader. Leaders talk directly to a MySQL cluster in that region. All TAO writes go through followers to leaders. Caches are updated as the reply to a successful write propagates back down the chain of clusters. Leaders are responsible for maintaining cache consistency within a region. They also act as secondary caches, with an option to cache objects and associations in flash. Last but not least, they provide an additional safety net to protect the persistent store during planned or unplanned outages.
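The sketch below traces that read and write path: follower miss goes to the leader, leader miss goes to MySQL, each tier fills its cache on the way back, and writes flow through the leader with the successful reply refreshing the follower. The classes and the dict standing in for MySQL are illustrative, not TAO’s implementation.

```python
class Database:                       # stands in for the regional MySQL cluster
    def __init__(self, rows):
        self.rows = dict(rows)
    def read(self, key):
        return self.rows.get(key)
    def write(self, key, value):
        self.rows[key] = value

class Leader:
    def __init__(self, db):
        self.db, self.cache = db, {}
    def read(self, key):
        if key not in self.cache:     # leader miss -> persistent store
            self.cache[key] = self.db.read(key)
        return self.cache[key]
    def write(self, key, value):
        self.db.write(key, value)     # write-through to MySQL
        self.cache[key] = value
        return value

class Follower:
    def __init__(self, leader):
        self.leader, self.cache = leader, {}
    def read(self, key):
        if key not in self.cache:     # follower miss -> leader
            self.cache[key] = self.leader.read(key)
        return self.cache[key]
    def write(self, key, value):
        # Writes always go through the leader; the successful reply refreshes
        # this follower's cache on the way back down the chain.
        self.cache[key] = self.leader.write(key, value)

db = Database({("obj", 6): {"type": "checkin"}})
follower = Follower(Leader(db))
print(follower.read(("obj", 6)))      # miss at both tiers, filled from the database
follower.write(("obj", 6), {"type": "checkin", "message": "updated"})
print(follower.read(("obj", 6)))      # now served from the follower cache
```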

Consistency

We chose eventual consistency as the default consistency model for TAO. Our choice was driven by both performance considerations and the inescapable consequences of the CAP theorem for practical distributed systems, where machine failures and network partitioning (even within the data center) are a virtual certainty. For many of our products, TAO losing consistency is a lesser evil than losing availability. TAO tries hard to guarantee with high probability that users always see their own updates. For the few use cases requiring strong consistency, TAO clients may override the default policy at the expense of higher processing cost and potential loss of availability.

We run TAO as a single primary region per shard and rely on MySQL replication to propagate updates from the region where the shard is primary to all other regions (secondary regions). A secondary region cannot update the shard in its regional persistent store. It forwards all writes to the shard’s primary region. The write-through design of the cache simplifies maintaining read-after-write consistency for writes that are made in a secondary region for the affected shard. If necessary, the primary region can be switched to another region at any time. This is an automated procedure that is commonly used for restoring availability when a hardware failure brings down a MySQL instance.
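As a rough sketch of that scheme: a secondary region never writes its own persistent store; it forwards the write to the shard’s primary region and then caches the acknowledged value locally so the writer sees its own update, while replication (modeled crudely below) brings the other regional databases up to date. Region names and the replication step are illustrative.

```python
PRIMARY_REGION = {0: "us-west", 1: "europe"}   # shard -> primary region (hypothetical layout)

class Region:
    def __init__(self, name):
        self.name, self.db, self.cache = name, {}, {}

    def write(self, shard, key, value, regions):
        primary = PRIMARY_REGION[shard]
        if self.name != primary:
            # Forward to the primary region, then cache the acknowledged value
            # locally to keep read-after-write consistency for this writer.
            regions[primary].write(shard, key, value, regions)
            self.cache[key] = value
            return
        self.db[key] = value                    # write-through in the primary region
        self.cache[key] = value
        for r in regions.values():              # crude stand-in for MySQL replication
            if r is not self:
                r.db[key] = value

    def read(self, key):
        return self.cache.get(key, self.db.get(key))

regions = {"us-west": Region("us-west"), "europe": Region("europe")}
regions["europe"].write(0, ("obj", 6), {"msg": "hi"}, regions)  # shard 0 is primary in us-west
print(regions["europe"].read(("obj", 6)))                       # the writer sees its own update
print(regions["us-west"].db[("obj", 6)])                        # persisted in the primary region
```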

A massive amount of effort has gone into making TAO the easy to use and powerful distributed data store that it is today. TAO has become one of the most important data stores at Facebook — the power of the graph helps us tame the demanding and dynamic social workload. For more details on the design, implementation, and performance of TAO, I invite you to read our technical paper published in the USENIX ATC ’13 proceedings.

Thanks to all the engineers who worked on building TAO: Zach Amsden, Nathan Bronson, George Cabrera, Prasad Chakka, Tom Conerly, Peter Dimov, Hui Ding, Mark Drayton, Jack Ferris, Anthony Giardullo, Sathya Gunasekar, Sachin Kulkarni, Nathan Lawrence, Bo Liu, Sarang Masti, Jim Meyering, Dmitri Petrov, Hal Prince, Lovro Puzar, Terry Shen, Tony Savor, David Goode, and Venkat Venkataramani.

In an effort to be more inclusive in our language, we have edited this post to replace the terms “master” and “slave” with “primary” and “secondary.”