
I'm building a web app. I have a database of books indexed in ElasticSearch and a REST API written in PHP.

In the app there's a search box where I type the name of a book; a JS script calls my search endpoint, which then runs a curl request with the search query against ElasticSearch.

Problem is, when the user types fast, there are too many requests. It starts to slow down: normally a single request takes about 200 ms, but it goes up to 5-10 s, which is far too long. I could run fewer requests, but I want that instant feedback.

So I ask: does curl on my server run only one request at a time, even when it's invoked from separate PHP requests, or is it something else?

Michal Artazov
  • "when user types fast, there are too many requests", why?, are you using auto-complete searches?, and if so on what letter position you start searching? (you can increment that value) – LinuxDevOps May 23 '14 at 20:39

2 Answers


The short answer is no, it isn't asynchronous. The longer answer is "not unless you wrote the backend yourself to make it so."

If you're using XHR, each request gets its own worker thread on the backend, which means no request should block any other, barring process and memory limits. While XHR presents an event-based interface, it's still a live HTTP request handled synchronously by the browser (you only get one thread in JS). The PHP backend is also making its curl calls synchronously, not answering your XHR's HTTP request until the curl call finishes. You could set your JavaScript up to poll for results, but since your time-to-live is under 3-5 seconds, it's not worth it.

If you're using WebSockets, then you'll have to tell us. A given WebSocket is tied to one process on the backend, but you can fork/thread/do whatever in that process, and it also lets you push events to the browser without the client initiating a request. That could be asynchronous, but if the slowdown is in your backend, switching to an asynchronous design isn't going to help.

Realistically, you should have your client-side JavaScript wait to issue the next search until the last search returns. On the server side, to prevent DoS, if a single client starts sending too many completion requests, start dropping them with HTTP 429, and handle 429 responses in your JS with incremental backoff, retrying later if appropriate.
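
A minimal sketch of that idea: hold the next search until the last one returns, and back off on HTTP 429. The `/search` endpoint, the `q` parameter, and `renderResults()` are assumptions standing in for whatever your app already has:

```javascript
// Hypothetical wiring: /search, q, and renderResults() are placeholders.
var inFlight = false;
var pendingQuery = null;
var backoffMs = 0;

function search(query) {
    if (inFlight) {            // a request is running: remember only the newest query
        pendingQuery = query;
        return;
    }
    inFlight = true;
    setTimeout(function () {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/search?q=' + encodeURIComponent(query));
        xhr.onload = function () {
            inFlight = false;
            if (xhr.status === 429) {
                // server is shedding load: double the delay before retrying
                backoffMs = Math.min((backoffMs || 250) * 2, 4000);
                pendingQuery = pendingQuery || query;
            } else {
                backoffMs = 0;
                renderResults(JSON.parse(xhr.responseText));
            }
            if (pendingQuery !== null) {   // issue the newest query we skipped
                var next = pendingQuery;
                pendingQuery = null;
                search(next);
            }
        };
        xhr.onerror = function () { inFlight = false; };
        xhr.send();
    }, backoffMs);
}
```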

Another thing you should definitely do is set the request timeout in curl much lower, so requests time out appropriately. If the search data is only useful for 2-3 seconds, your curl timeout should be about the same. If your backend is intelligent enough, it will interpret a closed connection to mean "stop searching," and hopefully you'll free the resources in time for another process to use them.
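
Something like this on the PHP side; the Elasticsearch URL and `$queryJson` are placeholders for whatever your proxy script already builds:

```php
<?php
// Hypothetical search proxy: the URL and $queryJson are placeholders.
$ch = curl_init('http://localhost:9200/books/_search');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $queryJson);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT_MS, 500); // fail fast if ES is unreachable
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 2000);       // a result older than ~2s is stale anyway
$response = curl_exec($ch);
if ($response === false && curl_errno($ch) === CURLE_OPERATION_TIMEOUTED) { // error 28
    http_response_code(504); // tell the client this search expired
} else {
    header('Content-Type: application/json');
    echo $response;
}
curl_close($ch);
```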

Andrew Domaszek

The way I understand it, you have a webserver, which runs some script that can be used for autocomplete. This script runs a query against another server using cURL.

First, to answer your question: your webserver can, in all likelihood, run multiple PHP processes in parallel, and since cURL is invoked from PHP, it too runs in parallel. You didn't say which webserver you run, but most support this.

However, your setup looks quite network-intensive: each keypress generates a request to your server, which in turn generates a request to the other server. If your server lacks resources, or the other server has a rate limiter in place, you'll get bad performance. Consider caching the results on your webserver (so you don't have to cURL so often), or make your JavaScript wait a few milliseconds so fast typists don't trigger too many requests; sketches of both follow.
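
For the caching idea, a minimal sketch using APCu, assuming it's installed; `es_search()` is a placeholder for your existing cURL call:

```php
<?php
// Hypothetical cache wrapper; es_search() stands in for your existing cURL call.
function cached_search($term) {
    $key = 'ac:' . strtolower(trim($term));
    $results = apcu_fetch($key, $found);
    if ($found) {
        return $results;              // cache hit: no cURL round-trip at all
    }
    $results = es_search($term);      // your existing request to the other server
    apcu_store($key, $results, 30);   // keep 30s; prefixes repeat a lot while typing
    return $results;
}
```

For the client-side wait, the usual trick is a debounce: only fire once the user has paused typing. `searchBox` and `doSearch()` are placeholders for your input element and existing search call:

```javascript
var debounceTimer = null;
searchBox.addEventListener('input', function () {
    clearTimeout(debounceTimer);       // user typed again: cancel the pending search
    debounceTimer = setTimeout(function () {
        doSearch(searchBox.value);     // fires only after 250 ms of silence
    }, 250);
});
```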

jornane