RESTinio is an asynchronous HTTP server. A simple example from practice: returning a large amount of data in response


Recently I happened to work on an application that had to control the speed of its outgoing connections. For example, when connecting to one URL the application had to limit itself to, say, 200KiB/sec, and when connecting to another URL, to only 30KiB/sec.


The most interesting part was testing these very limits. I needed an HTTP server that would serve traffic at a given speed, for example 512KiB/sec. Then I could check whether the application really holds itself to 200KiB/sec or breaks through to higher speeds.


But where do you get such an HTTP server?


Since I am involved in the development of RESTinio, an HTTP server that can be embedded in C++ applications, I could not come up with anything better than quickly knocking together a simple test HTTP server capable of sending a long stream of outgoing data to the client.


How simple it turned out to be is what I would like to describe in this article. At the same time I can find out in the comments whether it really is simple, or whether I am deceiving myself. In principle, this article can be considered a continuation of the previous article about RESTinio, called "RESTinio is an asynchronous HTTP server. Asynchronous". So if you are interested in a real, albeit not very serious, application of RESTinio, welcome under the cut.


General idea


The general idea of the test server mentioned above is very simple: when a client connects to the server and performs an HTTP GET request, a timer is started that fires once a second. When the timer fires, the next data block of a given size is sent to the client.


But everything is a little more complicated


If the client reads data more slowly than the server sends it, then simply pushing N kilobytes into the socket once a second is not a good idea: the data will start to pile up in the socket, and nothing good will come of that.


Therefore, when sending data, it is advisable to monitor the readiness of the socket for writing on the HTTP server side. As long as the socket is ready (that is, too much data has not yet accumulated in it), a new portion can be sent. But if it is not ready, we have to wait until the socket becomes ready for writing again.


This sounds reasonable, but the I/O operations are hidden in the guts of RESTinio... How can we find out whether the next piece of data can be written or not?


The way out of this situation is to use the after-write notifiers that RESTinio provides. For example, we can write this:


void request_handler(restinio::request_handle_t req) {
  req->create_response() // Start forming the response.
    ... // Fill the response with data.
    .done([](const auto & ec) {
      ... // This code is called when the writing of the response finishes.
    });
}

The lambda passed to the done() method will be called when RESTinio finishes writing the outgoing data. Accordingly, if the socket was not ready for writing for some time, the lambda will be called not immediately, but only after the socket returns to a writable state and accepts all the outgoing data.


Due to the use of after-write notifiers, the logic of the test server becomes the following:


  1. send the next portion of data to the client and attach an after-write notifier to it;
  2. when the notifier fires (i.e. the portion has actually been written), check how much time has passed since the portion was sent;
  3. if less than a second has passed, arm a timer and send the next portion when the timer fires;
  4. if a second or more has already passed (i.e. the socket was slow), send the next portion immediately.


As a result, as soon as writing starts to slow down, the sending of new data is suspended, and it resumes when the socket is ready to accept new outgoing data.


And a little more complicated: chunked_output


RESTinio supports three ways of generating the response to an HTTP request. The simplest one, which is used by default, does not suit me here, because I need an almost endless stream of outgoing data, and such a stream obviously cannot be handed over in a single call to the set_body method.
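For contrast, a minimal sketch of that default approach (a hypothetical handler, not code from the test server) would look something like this:

void simple_handler(restinio::request_handle_t req) {
  req->create_response()
    .set_body("Hello, world!") // The whole body is handed over at once.
    .done();
}

Such a handler is fine for small responses, but not for an almost endless stream.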


Therefore, the described test server uses so-called chunked_output. I.e., when creating the response, I tell RESTinio that the response will be formed in parts. Then I simply call append_chunk periodically to add the next part to the response, and flush to write the accumulated parts to the socket.
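The snippets below rely on two short type aliases from the full source. Reconstructed from the way they are used (see the repository for the exact definitions), they look like this:

using output_t = restinio::chunked_output_t; // The response is formed in parts.
using response_t = restinio::response_builder_t<output_t>;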


And let's look into the code!


Perhaps that is enough introductory words, and it is time to move on to the code itself, which can be found in this repository. Let's start with the request_processor function, which is called to process each valid HTTP request, and dig into the functions it calls. After that we will see how exactly request_processor gets mapped to particular incoming HTTP requests.


The request_processor function and its helpers


The request_processor function is called to process the HTTP GET requests I am interested in. It receives as arguments:


  1. a reference to the io_context on which all the work takes place (it is needed for arming timers);
  2. the size of a single block (chunk) of the outgoing data stream;
  3. the number of blocks to send;
  4. the handle of the request to be processed.

Inside request_processor, an object is created that holds the request and its processing parameters, after which the processing itself begins:


void request_processor(
  asio_ns::io_context & ctx,
  std::size_t chunk_size,
  std::size_t count,
  restinio::request_handle_t req)
{
  auto data = std::make_shared<response_data>(
      ctx, chunk_size, req->create_response<output_t>(), count);

  data->response_
    .append_header(restinio::http_field::server, "RESTinio")
    .append_header_date_field()
    .append_header(
        restinio::http_field::content_type,
        "text/plain; charset=utf-8")
    .flush();

  send_next_portion(data);
}

The response_data type, containing all request-related parameters, looks like this:


struct response_data
{
  asio_ns::io_context & io_ctx_;
  std::size_t chunk_size_;
  response_t response_;
  std::size_t counter_;

  response_data(
    asio_ns::io_context & io_ctx,
    std::size_t chunk_size,
    response_t response,
    std::size_t counter)
    : io_ctx_{io_ctx}
    , chunk_size_{chunk_size}
    , response_{std::move(response)}
    , counter_{counter}
  {}
};

It should be noted here that one of the reasons for the existence of the response_data structure is that an object of type restinio::response_builder_t<restinio::chunked_output_t> (it is this type that hides behind the short alias response_t) is moveable, but not copyable (by analogy with std::unique_ptr). Therefore this object cannot simply be captured in a lambda function that is then wrapped into a std::function. But if the response object is placed in a dynamically allocated instance of response_data, then a smart pointer to that response_data instance can be captured in lambda functions without any problems, and such a lambda can then be stored in a std::function.
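Judging by the signatures of the functions below, the response_data_shptr alias used there is simply (a reconstruction; see the repository for the exact definition):

using response_data_shptr = std::shared_ptr<response_data>;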


The send_next_portion function


The send_next_portion function is called every time the next part of the response needs to be sent to the client. Nothing complicated happens in it, so it looks quite simple and concise:


void send_next_portion(response_data_shptr data)
{
  data->response_.append_chunk(make_buffer(data->chunk_size_));

  if(1u == data->counter_)
  {
    data->response_.flush();
    data->response_.done();
  }
  else
  {
    data->counter_ -= 1u;
    data->response_.flush(make_done_handler(data));
  }
}

I.e., we send the next part; if this part was the last one, we complete the processing of the request; and if it was not the last, a notifier is passed to the flush method, a notifier created by what is perhaps the most complex function of this example.
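The make_buffer helper is not shown in this article. A minimal hypothetical implementation (the real helper in the repository may well differ, e.g. by reusing a single preallocated buffer) could be:

// Hypothetical sketch: produce a chunk of the requested size
// filled with a dummy character.
std::string make_buffer(std::size_t size)
{
  return std::string(size, 'x');
}

A string like this can then be handed over to append_chunk, which takes ownership of it.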


The make_done_handler function


The make_done_handler function is responsible for creating a lambda that is passed to RESTinio as an after-write notifier. This notifier must check whether the next part of the response was written successfully. If so, it has to figure out whether the next part should be sent immediately (meaning the socket "braked" and the desired rate cannot be sustained) or after a pause. If a pause is needed, it is arranged by arming a timer.


These are simple actions in general, but in the code you get a lambda inside a lambda, which can confuse people not used to "modern" C++ (which has already been around for quite a few years to still be called modern ;)


auto make_done_handler(response_data_shptr data)
{
  const auto next_timepoint = steady_clock::now() + 1s;
  return [=](const auto & ec) {
    if(!ec)
    {
      const auto now = steady_clock::now();
      if(now < next_timepoint)
      {
        // The portion was written in less than a second, so the rest
        // of the second has to be waited out on a timer.
        auto timer = std::make_shared<asio_ns::steady_timer>(data->io_ctx_);
        timer->expires_after(next_timepoint - now);
        timer->async_wait([timer, data](const auto & ec) {
            if(!ec)
              send_next_portion(data);
          });
      }
      else
        // More than a second has already passed (the socket was slow),
        // so the next portion can be sent right away.
        data->io_ctx_.post([data] { send_next_portion(data); });
    }
  };
}

In my opinion, the main difficulty in this code stems from the peculiarities of creating and arming timers in Asio. To my taste it turns out somewhat too verbose. But it is what it is; on the other hand, no additional libraries need to be pulled in.


Connecting an express-like router


The request_processor, send_next_portion and make_done_handler functions shown above essentially made up the very first version of my test server, written literally in 15 or 20 minutes.


But after a couple of days of using this test server, a serious drawback surfaced: it always returned the response stream at one and the same speed. Compiled for 512KiB/sec, it serves everyone at 512KiB/sec. Recompiled for 20KiB/sec, it serves everyone at 20KiB/sec and nothing else. This was inconvenient, because it became necessary to receive responses of different "thickness".


Then the idea came up: what if the desired speed is specified directly in the URL? For example, a request to localhost:8080/ returns a response at some predetermined speed, while a request to localhost:8080/128K returns a response at 128KiB/sec.


Then the thought went even further: the number of individual parts in the response can also be specified in the URL. I.e., a request to localhost:8080/128K/3000 will produce a stream of 3000 parts at a speed of 128KiB/sec.


No problem. RESTinio can use a request router made under the influence of ExpressJS. As a result, the following function appeared for describing the handlers of incoming HTTP requests:


auto make_router(asio_ns::io_context & ctx)
{
  auto router = std::make_unique<router_t>();

  router->http_get("/",
    [&ctx](auto req, auto) {
      request_processor(ctx, 100u*1024u, 10000u, std::move(req));
      return restinio::request_accepted();
    });

  router->http_get(
    R"(/:value(\d+):multiplier([MmKkBb]?))",
    [&ctx](auto req, auto params) {
      const auto chunk_size = extract_chunk_size(params);

      if(0u != chunk_size)
      {
        request_processor(ctx, chunk_size, 10000u, std::move(req));
        return restinio::request_accepted();
      }
      else
        return restinio::request_rejected();
    });

  router->http_get(
    R"(/:value(\d+):multiplier([MmKkBb]?)/:count(\d+))",
    [&ctx](auto req, auto params) {
      const auto chunk_size = extract_chunk_size(params);
      const auto count = restinio::cast_to<std::size_t>(params["count"]);

      if(0u != chunk_size && 0u != count)
      {
        request_processor(ctx, chunk_size, count, std::move(req));
        return restinio::request_accepted();
      }
      else
        return restinio::request_rejected();
    });

  return router;
}

Here HTTP GET request handlers are defined for three types of URLs:


  1. / - a stream with the default parameters: 100KiB chunks, 10000 chunks in total;
  2. /<speed>[U] - a stream of 10000 chunks at the given speed;
  3. /<speed>[U]/<count> - a stream of <count> chunks at the given speed.


Here <speed> is a number that defines the speed, and U is an optional multiplier that specifies the units in which the speed is set. So 128 or 128b means 128 bytes per second, while 128k means 128 kilobytes per second.


Each URL has its own lambda function that makes sense of the received parameters and, if everything is fine, calls the request_processor function shown above.
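By the way, the resulting server is convenient to probe right from the command line, for example with a hypothetical curl invocation like curl -o /dev/null -w "%{speed_download}\n" "http://localhost:8080/128k/600" (the -w option here prints the average download speed in bytes per second, which makes it easy to verify the limit).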


The helper function extract_chunk_size looks as follows:


std::size_t extract_chunk_size(const restinio::router::route_params_t & params)
{
  const auto multiplier = [](const auto sv) noexcept -> std::size_t {
    if(sv.empty() || "B" == sv || "b" == sv) return 1u;
    else if("K" == sv || "k" == sv) return 1024u;
    else return 1024u*1024u;
  };

  return restinio::cast_to<std::size_t>(params["value"]) *
      multiplier(params["multiplier"]);
}

Here a C++ lambda is used to emulate the local functions of other programming languages.
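For example, for a request to localhost:8080/128k the router captures value="128" and multiplier="k", so extract_chunk_size returns 128 * 1024 = 131072, i.e. the server will send 128KiB chunks once a second.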


The main function


It remains to see how all of this is started in the main function:


using router_t = restinio::router::express_router_t<>;
...
int main()
{
  struct traits_t : public restinio::default_single_thread_traits_t
  {
    using logger_t = restinio::single_threaded_ostream_logger_t;
    using request_handler_t = router_t;
  };

  asio_ns::io_context io_ctx;

  restinio::run(
    io_ctx,
    restinio::on_this_thread<traits_t>()
      .port(8080)
      .address("localhost")
      .write_http_response_timelimit(60s)
      .request_handler(make_router(io_ctx)));

  return 0;
}

What's going on here:


  1. Since the ordinary default request router does not suit me (it can do nothing at all and puts all the work on the programmer's shoulders), I define new properties for my HTTP server. To do this I take the standard properties of a single-threaded HTTP server (the type restinio::default_single_thread_traits_t) and specify that an instance of the express-like router will be used as the request handler. In addition, to monitor what is happening inside, I specify that the HTTP server should use a real logger (by default null_logger_t is used, which logs nothing at all).
  2. Since I need to arm timers inside the after-write notifiers, I need an io_context instance I can work with. Therefore I create it myself, which gives me the opportunity to pass a reference to my io_context into the make_router function.
  3. It remains only to start the HTTP server in single-threaded mode on the io_context I created earlier. The restinio::run function returns control only when the HTTP server finishes its work.

Conclusion


The article did not show the full code of my test server, only its key parts. The full code, which is slightly larger because of additional typedefs and auxiliary functions, can be seen here. At the time of writing it is 185 lines, including blank lines and comments, and those 185 lines were written in a couple of sittings totalling hardly more than an hour.


I liked the result, and the task itself was interesting. In practical terms, the auxiliary tool I needed was obtained quickly, and regarding the further development of RESTinio, some new ideas appeared.


In general, if you have not tried RESTinio yet, I invite you to give it a try. The project itself lives on BitBucket, with a mirror on GitHub. You can ask questions or make suggestions in the Google group or right here in the comments.

Source: https://habr.com/ru/post/462349/

