Our cost analysis
We measure a query’s potential cost in terms of its type complexity and response complexity. Our analyses are essentially weighted recursive sums, with list sizes bounded by the limit arguments that the slicing and connections patterns supply.
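To make the shape of that computation concrete, here is a minimal TypeScript sketch under our own simplifying assumptions: a pre-resolved query tree where every field already knows its weight and effective limit. It is not the paper's implementation, and it uses a depth-first recursion where the analysis described below works breadth-first, though both compute the same sum.

```typescript
// A minimal sketch of the weighted recursive sum; the QueryNode shape,
// weight fields, and limit handling are illustrative assumptions.
interface QueryNode {
  typeWeight: number;    // configured weight of the type this field returns
  limit: number;         // 1 for non-list fields; the slicing argument
                         // (or the configured default) for list fields
  children: QueryNode[]; // the field's sub-selections
}

// Type complexity: each node contributes its weight, and a list field
// multiplies the cost of its subtree by its limit.
function typeComplexity(node: QueryNode): number {
  const childCost = node.children
    .map(typeComplexity)
    .reduce((sum, c) => sum + c, 0);
  return node.limit * (node.typeWeight + childCost);
}
```

Response complexity would follow the same recursion with resolver weights in place of type weights (again, a simplification).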
Since our approach relies on conventions, the schema must be accompanied by a configuration explaining how it follows them. This configuration accomplishes three things (a sketch follows the list):
- It labels the fields that take limit arguments, and, for the connections pattern, the sub-fields to which those limits apply.
- It provides default limits to use when a query omits a limit argument.
- It optionally provides weights dictating how much each type and each resolver function costs; a default weight of 1 is a reasonable choice.
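For illustration, a configuration for a GitHub-like schema might look like the following. The format and names are our own invention, not the paper's actual configuration language:

```typescript
// A hypothetical configuration; the shape and names are illustrative.
const config = {
  // Which argument carries the limit, and (for the connections pattern)
  // which sub-fields that limit applies to.
  limits: {
    "Repository.issues": { argument: "first", appliesTo: ["edges", "nodes"] },
  },
  // Used when a query omits the limit argument.
  defaultLimits: {
    "Repository.issues": 30,
  },
  // Optional per-type and per-resolver weights (default: 1).
  typeWeights: { Issue: 1, Repository: 1 },
  resolverWeights: { "Repository.issues": 1 },
};
```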
Guarantees Given a schema and a configuration, our analysis always yields an upper bound on a query’s cost in terms of its type complexity and response complexity. This guarantee assumes that all fields returning lists enforce their limit arguments and have a finite default limit. These are upper bounds because we assume that the underlying graph is pathological: every list requested in the query will be fully populated, up to its limit.
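As a worked example under illustrative assumptions (all weights set to 1): a query that slices an outer list to 10 items and, within each, an inner list to 5 items has type complexity at most 1 + 10 + 10 × 5 = 61, matching the sketch above. The analysis reports that bound even when the actual response contains far fewer items.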
Time and space complexity of our analysis
- Time: Our analysis works its way “outwards-in” in a breadth-first manner. It visits each node of the query once, because inner layers of the query cannot increase the number of nodes at outer layers of the response. It therefore runs in time linear in the size of the query.
- Space: The context of the recursive analysis must be carried along to track the limits that apply to sub-fields under the connections pattern. The convention (and our implementation) only applies these limits to direct children, for a constant space cost per limit (see the example after this list). In general, the space complexity matches the maximum nesting depth of the query.
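As an illustration of the connections pattern, here is a GitHub-style query of our own devising (not taken from the paper):

```typescript
// The `first: 10` argument on `issues` bounds the `edges` list one level
// below it, so the analysis only ever carries a limit from a field to its
// direct children.
const connectionsQuery = `
  query {
    repository(owner: "octocat", name: "hello-world") {
      issues(first: 10) {
        edges {
          node { title }
        }
      }
    }
  }
`;
```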
Implementation We implemented this approach, but have not yet persuaded IBM to open source it :-). We hope that open-source tools [4,5,6] will consider incorporating our ideas.
Evaluation
We investigated five questions. Our evaluation used two public GraphQL services, GitHub and Yelp.
Can the analysis (and configuration) be applied to real-world APIs?
Our analysis requires a GraphQL schema to be accompanied by a configuration of limit arguments, weights, and default limits.
We found it straightforward to develop a configuration for both schemas. Since GraphQL schemas follow naming conventions [2], we supported regular expressions to denote field names in our configuration language; our configurations are therefore far smaller than the corresponding schemas.
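To give a flavor of the regex support, one entry can cover many fields. The pattern below is our own illustration, not one of our actual configuration entries:

```typescript
// One illustrative regex entry covering every connection-style field whose
// name matches the pattern, instead of one entry per field.
const regexConfig = {
  limits: {
    // e.g. matches issues, pullRequests, repositories, ...
    ".*s$": { argument: "first", appliesTo: ["edges", "nodes"] },
  },
  defaultLimits: { ".*s$": 30 },
};
```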
Do we always obtain upper bounds?
Yes! We developed an open-source GraphQL query generator [7] and generated 5,000 query-response pairs for each API. The predicted and actual complexities are plotted here. We guarantee upper bounds, so our predicted costs always lie above the line y=x in the figure.
Are these bounds tight enough to be useful?
This is a bit subjective. Our bounds may be over-estimates, but are as tight as possible with a static, data-agnostic approach. The results depend on the underlying data graph.
Data sparsity leads responses to be less complex than their worst-case potential. In the preceding figure, our upper bound is close or equal to the actual type and response complexities of many queries, as seen in the high density of queries near the diagonals. Our bounds grow looser for more-complex queries. This follows intuition about the underlying graph: larger, more deeply nested queries may not be satisfiable by an API’s real data.
We quantify this in the paper (see Table 2).
Is our analysis cheap enough to be applicable in middleware?
Yes! Beyond the functional correctness of our analysis, we assessed its runtime cost to see if it can be incorporated into existing request-response flows, e.g. in a GraphQL client or an API management middleware.
We measured costs on a 2017 MacBook Pro (8-core Intel i7 processor, 16 GB of memory). As predicted, our analysis runs in linear time as a function of the query size. Nearly all queries could be analyzed in under 5 milliseconds.
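Our timing setup is described in the paper; as an illustrative stand-in, one could sanity-check the linear-time behavior by running the earlier sketch on synthetic flat query trees (assumes a recent Node.js, where `performance.now` is available globally):

```typescript
// Build a flat query tree with n leaf fields (illustrative benchmark only).
function wideQuery(n: number): QueryNode {
  const leaf = (): QueryNode => ({ typeWeight: 1, limit: 1, children: [] });
  return { typeWeight: 1, limit: 1, children: Array.from({ length: n }, leaf) };
}

for (const n of [1_000, 10_000, 100_000]) {
  const query = wideQuery(n);
  const start = performance.now();
  typeComplexity(query);
  console.log(`${n} fields: ${(performance.now() - start).toFixed(2)} ms`);
}
```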
How does our approach compare to other static analyses?
We compared against open-source and closed-source analyses.
Open source We compared our approach against those of three open-source libraries [4,5,6]. The following figure summarizes the results. Note that libA produces wild over-estimates, while libB and libC follow identical approaches that are susceptible to under-estimates.