Wrap REST as GraphQL


From a front-end developer's perspective, GraphQL is a data-layer paradigm that supports optimistic updates, declarative data fetching right next to React components, and WYSIWYG data. Popularized by Facebook, it has a lot more to offer than RESTful APIs. This article walks through a production example and shows how to wrap an existing RESTful API into a front-end GraphQL API without burdening the backend developers.

In a front-end project I took over from an outsourcer, I found a lot of interesting relics: useless imports copy-pasted between pages, plenty of files with a .bak suffix that barely differ from the pages they duplicate, childish component naming, and so on. The way APIs were used left a particularly deep impression, something like this:

```js
import * as mapi from '../../lib/mapi';

// ...irrelevant code omitted

componentWillMount() {
  mapi.pie().then(json => this.setState({ pieData: json }));
  mapi.whoami().then(json => this.setState({ user: json }));
}
```
It looks a bit bloated, because there's a lot of repetition in the code.

It also looks a bit opaque, because we can't see what's inside pieData. Once the data nests one level deeper, it's easy to hit `Cannot read property 'machine' of undefined` or `undefined is not an object`, and if you're also using ES6 computed property names, debugging starts to take a while.

There are plenty of other problems too, such as having to null-check the data in the UI.

The root of the problem is that we're putting the data in component state, which, unlike props, gives us neither type checking nor default values when a field is null.
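To make the contrast concrete, here is a minimal sketch (the component names and the `Pie` chart component are hypothetical, reusing the `mapi` example above): nested state has no safety net on the first render, while props can fall back to defaultProps:

```jsx
import React, { Component } from 'react';
import * as mapi from '../../lib/mapi';

// With state, nothing guards the first render (pieData is still undefined):
class StatePie extends Component {
  state = {};
  componentWillMount() {
    mapi.pie().then(json => this.setState({ pieData: json }));
  }
  render() {
    // First render: "Cannot read property 'machine' of undefined"
    return <Pie value={this.state.pieData.machine} />;
  }
}

// With props, defaultProps supplies a fallback shape on every render:
class PropsPie extends Component {
  static defaultProps = { pieData: { machine: 0 } };
  render() {
    return <Pie value={this.props.pieData.machine} />;
  }
}
```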

While a data-flow framework such as Redux can solve most of these problems, since it's time to refactor anyway, I want this refactoring iteration to keep paying off for as long as possible, by attacking the biggest obstacle to a developer's understanding. What I most want to fix is the visibility of my data:

"I want to see what the data looks like while I'm writing the UI, and ideally the data should mirror the structure of my UI, so that binding data to the UI is easy and enjoyable."

The status quo: every time I bind data, I have to poke the backend with Postman to see what the response looks like, then call the API and pluck the few fields I need out of the response and tie them to the UI. Of three thousand rivers of data, I take but one ladle. And because of business changes, I often have to use several APIs in a single UI component.

I also like camelCase in compound words, such as deviceId, while the backend uncle prefers all lowercase, for example dviceid. Oh my god, he dropped an e when he spelled it!

Ideally, the backend would follow business changes promptly, split the APIs apart, and flip through a baby-name book to give each new API a meaningful name. But obviously the backend won't drop everything at hand to reshape the API for you; after all, why should the data grow a face that matches the UI? Do you think your UI is that good-looking? And what if the requirements change again in three days? Ask him to rewrite the REST endpoints yet again and he'll whip you with his leather belt.

Using both Redux and GraphQL solves these problems.

In the past, with REST, we divided the data into different paths according to the business. That's equivalent to giving each pile of data a name that was easy to understand under the business requirements of version one, and then hoping every future feature can keep drawing on those same piles:

```js
// The ancient method of the outsourcing personnel
// (`get` is an assumed fetch helper; the paths are reconstructed)
export function whoami() { return get('/whoami'); }
export function pie() { return get('/pie'); }
export function entry() { return get('/entry'); }
export function districtpie(id) { return get(`/district/${id}/pie`); }
export function siteoverview(id) { return get(`/site/${id}/overview`); }
export function sitepie(id) { return get(`/site/${id}/pie`); }
export function cabinetsswitches(id) { return get(`/site/${id}/cabinets/switches`); }
export function sitewarningquantity(id) { return get(`/site/${id}/warning/quantity`); }
```
In GraphQL, we represent the data as a tree and can request any of its leaves directly.
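For example, assuming the field names sketched later in this article, a query simply walks the tree and picks the leaves it needs:

```graphql
# Illustrative query: request only the leaves the UI actually uses
query {
  whoami(token: "...") {
    name
    companyName
  }
  siteOverview(id: "42") {
    name
  }
}
```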

Similar to REST, we can still have easy-to-understand names, but instead of fantasizing about what data is hidden behind a path, we can directly observe finer-grained data, something like this:

```graphql
# The refactored API
type Query {
  # Account
  whoami(token: String): UserType
  # API info
  entry: EntryType
  # Data
  siteOverview(id: ID): PowerEntityType
  # District information: which sites or sub-districts an area contains;
  # a district has its own capacity pie-chart data. The same type can also
  # represent a site, whose data can include a capacity pie chart and a
  # list of transmission lines.
  districtPie(id: ID): PowerEntityType
}
```
We can see in the comments how each data type relates to the original REST endpoints: one data type can be cobbled together from several RESTful data sources (for example, integrating multiple microservices), and one REST data source can be recut into several data types along common-sense lines, so that you still understand it when you reread it three months later.
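As a sketch of the first case, here is how one GraphQL type might be assembled from two of the REST endpoints above (the field names are assumptions, and I'm assuming the connector from later in the article is exposed on the resolver context):

```js
// Hypothetical: one SiteType cobbled together from two RESTful sources
const resolvers = {
  SiteType: {
    // the overview fields come from GET /site/:id/overview
    overview: (site, args, context) =>
      context.connector.get(`site/${site.id}/overview`, args.token),
    // the warning count comes from GET /site/:id/warning/quantity
    warningQuantity: (site, args, context) =>
      context.connector.get(`site/${site.id}/warning/quantity`, args.token),
  },
};
```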

There are two direct benefits to this transformation. First, you can directly see what the data source looks like! No more Postman requests just because you can't remember which fields each endpoint returns. Second, it gives you static type checking: you think through what each type should look like while writing the types, and afterwards you can set aside the chore of checking data availability and focus on the business.

There are also two indirect benefits. One: this reads like writing Flow or TypeScript; you're adding type annotations to your data, so when you start learning Rust in a few months you'll feel right at home and go from beginner to proficient in 21 minutes. Two: you can annotate the data endpoints; you can use the ?? black-question-mark operator to declaratively express your confusion about the business, and the ! rowing-oar operator to declare that a data item is indispensable.
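The joke maps onto real syntax: GraphQL fields are nullable by default, and ! is the only marker you add. A sketch, with an assumed type:

```graphql
type PowerEntityType {
  id: ID!        # the ! rowing-oar operator: this field is indispensable
  name: String!
  pie: PieType   # nullable by default; ?? why does the backend sometimes omit this
}
```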

The language we used to declare the types above is called GraphQL, and GitHub is moving their API to GraphQL, so you can look at how they write it. It generally lives in a schema.js file, something like this:

```js
// schema.js
export const typeDefinitions = `schema {
  query: Query
}

type Query {
  whoami(token: String): UserType
  # ...other type declarations omitted
}
`;
```
The schema text starts on the very first line of the multi-line template literal, which keeps the line numbers aligned and easier to debug, and we export it as the constant typeDefinitions.

After writing the data type declarations, we continue in schema.js to describe how each field of each data type is actually fetched. We do that with resolver functions:

```js
// schema.js
export const resolvers = {
  Query: {
    whoami(root, args, context) {
      return context.user.getLoginStatus(args.token);
    },
  },
  UserType: {
    username(user, args, context) { return user.username; },
    password(user, args, context) { return user.password; },
    token(user, args, context) { return user.token; },
    id(user, args, context) { return user.id; },
    name(user, args, context) { return user.name; },
    companyId(user, args, context) { return user.companyid; },
    companyName(user, args, context) { return user.companyname; },
    departmentId(user, args, context) { return user.departmentid; },
    departmentName(user, args, context) { return user.departmentname; },
    role(user, args, context) { return user.role; },
    // ...other resolvers omitted; the real code follows the actual product
  },
};
```
As you can see, we provide one function as the data source for each field we might need. When the UI asks for a few fields, the request triggers exactly those functions, so only the data for the requested fields is returned; it is then type-checked and handed to the UI that made the request.
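This is also where the casing complaint from earlier gets fixed: a field resolver can quietly repair the backend's dviceid typo, so the UI only ever sees deviceId. A minimal sketch (the surrounding type is assumed):

```js
// Hypothetical sketch: rename and repair a field inside a resolver
const resolvers = {
  DeviceType: {
    // the backend sends `dviceid`; we expose camelCase `deviceId`
    deviceId: device => device.dviceid,
  },
};
```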

In the UI, using it feels roughly like this:

```js
import React, { Component } from 'react';
import { connect } from 'react-redux';
import { graphql } from 'react-apollo';
import gql from 'graphql-tag';

// `state.auth.token` is an assumed shape for the Redux store
function mapStateToProps(state) {
  return { token: state.auth.token };
}

const mainPageData = gql`
  query mainPageData($token: String!) {
    whoami(token: $token) {
      name
      companyName
    }
  }
`;

// We configure the graphql HOC to take the token from its props
const queryConfig = {
  options: ownProps => ({
    variables: { token: ownProps.token },
    pollInterval: 10000,
  }),
  props: ({ data }) => ({ ...data }),
};

@connect(mapStateToProps) // classic Redux usage
@graphql(mainPageData, queryConfig) // the graphql HOC
export default class Main extends Component {
  static defaultProps = { whoami: {} };

  render() {
    // bind this.props.whoami to the UI here
  }
}
```
In fact, the GraphQL data endpoint, that is, the data types and resolver functions in schema.js, should really be written by the backend uncle. But the backend uncle has a family and a life of his own; you can't ask him to give up the good things in life just so you can enjoy writing UI a little more. We can't be that cruel.

A better approach would be to insert a simple GraphQL data endpoint as an intermediary between our UI layer and the data layer.

In this intermediary, we can split an API that used to return one big blob of data into fine-grained data types, correct typos in the data the backend returns, and check whether data is empty. Let's look at an example:

```js
// rest2graphqlInterface.js
// Most of what follows is boilerplate; you only need to modify the model parts
import ApolloClient from 'apollo-client';
import { graphql, print } from 'graphql';
import { merge } from 'lodash';
import { makeExecutableSchema } from './schemaGenerator';
import { typeDefinitions as rootSchema, resolvers as rootResolvers } from './schema';
import { User } from './models';
// Modify this part: import your own connector
import HoutaidashuConnector from './houtaidashuConnector';

const typeDefs = [rootSchema];
const resolvers = merge({}, rootResolvers);
const executableSchema = makeExecutableSchema({ typeDefs, resolvers });
const serverConnector = new HoutaidashuConnector();

const rest2graphqlInterface = {
  query(request) {
    // depending on the apollo-client version, request.query may be an AST
    const query = typeof request.query === 'string' ? request.query : print(request.query);
    return graphql(
      executableSchema,
      query,
      {},
      {
        // this is where you introduce your own connector
        user: new User({ connector: serverConnector }),
      },
      request.variables,
    );
  },
};

const client = new ApolloClient({ networkInterface: rest2graphqlInterface });
export default client;
```
Then replace the Redux Provider with Apollo's own ApolloProvider:

From:

```jsx
<Provider store={store}>
  <Main />
</Provider>
```

it becomes:

```jsx
<ApolloProvider store={store} client={client}>
  <Main />
</ApolloProvider>
```

In this way, the chain of fetching data is roughly strung together, and you will find that the data flows like this:

The backend uncle's RESTful API -> Connector -> Model -> Resolver functions -> ExecutableSchema -> rest2graphqlInterface -> ApolloClient -> Redux -> your UI's props -> your UI

As we saw earlier, when we created rest2graphqlInterface, we wrote:

user: new User({ connector: serverConnector }),

where User is a model.

Why is it written like that?

In fact, all of the data fetching, that is, every request to the backend uncle's RESTful API, could be written directly inside the resolver functions. But then we'd write a lot of repetitive code, and every little piece of data would cost its own request, which wastes resources: fetching all three thousand rivers and drinking a single ladle, over and over.
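To see the waste, here is a sketch of that naive version, with the fetch pasted into every resolver (the URL and field names are assumed):

```js
const API_ROOT = 'http://test.example.com/api'; // assumed

// Naive sketch: every field resolver fires its own request to /whoami
const resolvers = {
  UserType: {
    name: (root, args) =>
      fetch(`${API_ROOT}/whoami?token=${args.token}`)
        .then(response => response.json())
        .then(json => json.data.name),
    companyName: (root, args) =>
      fetch(`${API_ROOT}/whoami?token=${args.token}`)
        .then(response => response.json())
        .then(json => json.data.companyname),
    // ...one pasted fetch per field: N fields means N identical requests
  },
};
```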

So we abstract out a layer of models:

```js
// models.js
export class User {
  constructor({ connector }) {
    this.connector = connector;
  }

  async getLoginStatus(token) {
    try {
      // the path follows the old REST endpoint
      return await this.connector.get('whoami', token);
    } catch (error) {
      console.error(error);
      return {};
    }
  }

  async getAllMetadata(token) {
    return this.connector.get('metadata', token);
  }

  getMetadata(field, token) {
    return this.getAllMetadata(token).then(metadata => metadata[field]);
  }
}
```
And since we want the model to focus on data caching and data cleaning, we abstract the network-request part into a connector:

```js
// houtaidashuConnector.js
import Promise from 'bluebird';
import { checkStatus } from './fetchUtils'; // assumed helper that throws on non-2xx responses

const API_ROOT = 'http://test.example.com/api'; // assumed; point it at your server

export default class HoutaidashuConnector {
  get(path, token) {
    const tokenParam = `token=${token}`;
    return Promise.try(() =>
      fetch(`${API_ROOT}/${path}?${tokenParam}`)
        .then(checkStatus)
        .then(response => response.json())
        .then(json => json.data)
    );
  }
}
```
In this way, we can switch between the test server on the intranet and the production server on the public network simply by swapping the connector, without touching the logic in the model and the schema (well, in practice they'll still change, because the whole point of connecting to the test server is to update the code for the newest business).
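Caching then slots naturally into the model layer. A minimal sketch, assuming a one-promise-per-token memo (this is my illustration, not the original code):

```js
// Sketch: memoize the metadata request per token inside the model
export class CachedUser {
  constructor({ connector }) {
    this.connector = connector;
    this.metadataCache = new Map(); // token -> promise of metadata
  }

  getAllMetadata(token) {
    if (!this.metadataCache.has(token)) {
      this.metadataCache.set(token, this.connector.get('metadata', token));
    }
    return this.metadataCache.get(token);
  }
}
```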

From there, we've set up a lightweight GraphQL server on the client side that acts as an intermediary between us and the backend. In my own testing it has no noticeable impact on performance, and good caching logic can even speed up page loads.

After adopting this technique, data fetching is no longer written in componentWillMount(), no longer hidden in a Redux reducer or a redux-saga generator; it lives right next to our UI, down at the grassroots where the data is actually used. What you see is what you get.

This client-side-GraphQL-server approach is also a good fit for controlling IoT devices: running a server-side GraphQL server on an IoT device is impractical, and writing IoT control against raw RESTful APIs gets bloated, so converting the data to GraphQL on the client side is a feasible middle path.

If you have questions about any of this, you can join the China GraphQL User Group (QQ group 302490951) to discuss. For example, a comrade who saw my earlier Relay tutorial asked me whether Relay was any good. I told him everything I knew: it's hard to use, which is why I've switched to apollostack.

