Is there a way to decrease the response time? #21
Comments
Hello,
Due to this, I am currently working on implementing a WebSocket connection based on @fhessel's sources. With that, you can establish a persistent connection that only needs to be opened once, and I hope this will speed up my ESP32 communication. This work is still ongoing, though, so there is nothing I can provide to you today.
The bottleneck is indeed the TLS handshake, and as far as I could investigate it, there is no way to get around it; it will take 1-2 seconds. Even if you use the ESP32 just as a TLS client, like in Espressif's WiFiClientSecure.ino example, you will find a delay of roughly 1 second if you print timestamps around the connection setup.

I added TLS session resumption some time ago (see f62add5). That would skip the "expensive" part of the handshake that uses public key crypto, but the client has to support it as well. You have a chance to configure or force this if the client is a standalone application (like a smartphone app), but if you want to access your ESP32 directly from within a web app, you most likely will not be able to do so.

Other measures that are implemented in the library only help so much; after all, the server is still running on a microcontroller with its limited resources.
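To see that delay in isolation, here is a minimal client-side sketch, assuming placeholder Wi-Fi credentials and server address and skipping certificate validation via setInsecure() for the timing test only; it prints how long connect(), which performs the TCP connect plus the TLS handshake, takes:

```cpp
#include <WiFi.h>
#include <WiFiClientSecure.h>

// Placeholders: adjust to your network and your ESP32 server's address.
const char* WIFI_SSID = "your-ssid";
const char* WIFI_PASS = "your-password";
const char* HOST      = "192.168.1.50";
const uint16_t PORT   = 443;

WiFiClientSecure client;

void setup() {
  Serial.begin(115200);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) {
    delay(100);
  }

  // Skip certificate validation for this timing experiment only.
  client.setInsecure();

  unsigned long t0 = millis();
  bool ok = client.connect(HOST, PORT);   // TCP connect + TLS handshake happen here
  unsigned long t1 = millis();

  Serial.printf("connect() %s after %lu ms\n",
                ok ? "succeeded" : "failed", t1 - t0);
  client.stop();
}

void loop() {}
```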
Regarding the session reuse: requests must come in within 5 minutes of each other, right?
That's how it's currently implemented, but you may adjust the corresponding constant in the library's code.

Another option is HTTP keep-alive, which reuses an established connection for several requests. The connection timeout for that is defined in src/HTTPSServerConstants.hpp; again, you'll have to change the library's code directly. At the moment, it's set to 20 seconds. Keep in mind that the memory of the common ESP32 modules does not allow for more than 4-5 concurrent TLS connections, so the library limits it to 4 by default. The timeout is quite low by default to cover common use cases, but if you have only a few, well-known clients, setting it to a higher value might be a better choice. For both parameters, the client has to accept the server's decision; it might not reuse the session, or it might close the connection on its own.
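A minimal sketch of such a configuration, assuming the generated cert.h/private_key.h headers from the library's examples and assuming the constructor's third argument is the connection limit (check HTTPSServer.hpp to confirm); the timeout values themselves still have to be edited in src/HTTPSServerConstants.hpp:

```cpp
#include <HTTPSServer.hpp>
#include <SSLCert.hpp>

// Generated headers as in the library's examples, containing the DER-encoded
// certificate and private key (assumed to be present in your sketch).
#include "cert.h"
#include "private_key.h"

using namespace httpsserver;

SSLCert cert = SSLCert(
  example_crt_DER, example_crt_DER_len,
  example_key_DER, example_key_DER_len
);

// Assumed constructor form: certificate, HTTPS port, maximum number of
// concurrent TLS connections (the library defaults to 4 due to limited RAM).
HTTPSServer secureServer(&cert, 443, 2);
```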
Now with keep-alive it is almost perfect. What I found is that the 5-minute session timeout (5 × 60 s = 300 s) does not seem to hold in practice. Also, I retrieve parameter 0 correctly with `const char* cmd_word = params->getUrlParameter(0).c_str();` when I request /param/hello/1, but if I retrieve both parameters the same way, then the first one is not retrieved correctly and the second one contains the value of the first.
Great that keep-alive works for you! Regarding the session timeout, that's what I said about the client: you cannot be sure that it will accept the timeout, and it might open multiple TLS sessions in parallel, leading to more than one handshake and more than one session ID. On reconnect, the client has to provide the ID for resumption, so that's nothing you can fully control on the server side. Even many HTTP client libraries won't let you control these details, I guess.

For the parameters, I'm not able to reproduce the behavior that you have experienced (or maybe I just don't get the problem yet). Could you please provide me with the definition of your ResourceNode?
This is the one I use:

```cpp
ResourceNode* urlParamNode = new ResourceNode("/param/*/*", "GET", &urlParamCallback);
```
(I edited your comment, as Markdown messed up the important part with the slashes and asterisks...) I'll have a look at it.
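For context, a minimal sketch of how such a node is hooked up, assuming a server instance named secureServer as in the library's examples:

```cpp
// Register the wildcard node defined above and start the server;
// the instance name secureServer follows the library's examples.
secureServer.registerNode(urlParamNode);
secureServer.start();
```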
I finally could reproduce it. I assume you store permanent references to the temporary result of `getUrlParameter(...).c_str()`.

Example 1: keeping a reference to the result of `.c_str()`:

```cpp
void handleURLParam(HTTPRequest * req, HTTPResponse * res) {
  ResourceParameters * params = req->getParams();
  // par1 and par2 point into temporary std::strings that are destroyed
  // at the end of each statement, so the pointers dangle afterwards.
  const char* par1 = params->getUrlParameter(0).c_str();
  const char* par2 = params->getUrlParameter(1).c_str();
  res->print("Parameter 1: ");
  res->print(par1);
  res->print("\nParameter 2: ");
  res->print(par2);
}
```

Output 1: for GET /param/foofoofoo/barbarbar, the parameters are not printed correctly, which matches the behavior you describe.
Example 2: store the `std::string`:

```cpp
void handleURLParam(HTTPRequest * req, HTTPResponse * res) {
  ResourceParameters * params = req->getParams();
  std::string par1 = params->getUrlParameter(0); // Store the std::string here
  std::string par2 = params->getUrlParameter(1);
  res->print("Parameter 1: ");
  res->print(par1.c_str()); // Convert to c_str() on the fly
  res->print("\nParameter 2: ");
  res->print(par2.c_str());
}
```

Output 2: for GET /param/foofoofoo/barbarbar, both parameters are printed correctly.
If you write your code like in the second example, you should be fine. If you prefer to store C-style char arrays, you might want to use strncpy to copy the data returned by c_str() into a buffer that you own.
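A minimal sketch of that strncpy approach, assuming an arbitrary 32-byte buffer and a single-parameter handler:

```cpp
#include <cstring>  // strncpy

void handleURLParam(HTTPRequest * req, HTTPResponse * res) {
  ResourceParameters * params = req->getParams();

  // Copy the value into a buffer owned by this function, so the data stays
  // valid after the temporary std::string has been destroyed.
  char buf[32];  // arbitrary size, adjust to your longest expected parameter
  std::string value = params->getUrlParameter(0);
  strncpy(buf, value.c_str(), sizeof(buf) - 1);
  buf[sizeof(buf) - 1] = '\0';  // strncpy does not terminate on truncation

  res->print("Parameter 1: ");
  res->print(buf);
}
```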
Ok, at least there is now a trace of it. I think I might have used your example code back in July, which looked something like the first example above.
Are there still things that need to be discussed, or may I close this issue for now? If you encounter any new problems with the lib, you can of course open another issue for them.
Closed for now. |
Hello,
I resumed work on my ESP32 HTTPS domotic server a while ago.
I've also updated my previous source with your new version in order to keep up with your fixes.
Anyway, in some situations I would like to handle requests faster (right now it takes 2.4 seconds to perform a digitalWrite).
I was wondering if you have any suggestions to decrease the response time. My project is pretty much based on your Async example: depending on the GET request and its parameters, I perform some actions (namely I/O). Ideally I would like to stay within 1 second.
Are we hitting a microcontroller bottleneck, in your opinion?
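A minimal sketch of such a handler, assuming a hypothetical node registered as ("/led/*", "GET", &handleLed) and GPIO 2 as placeholders, and using the std::string pattern discussed above:

```cpp
// Assumes pinMode(2, OUTPUT) has been called in setup() and the node
// ("/led/*", "GET", &handleLed) has been registered with the server.
void handleLed(HTTPRequest * req, HTTPResponse * res) {
  ResourceParameters * params = req->getParams();
  std::string state = params->getUrlParameter(0);  // keep the std::string, not a raw c_str()

  digitalWrite(2, state == "on" ? HIGH : LOW);      // GPIO 2 is an arbitrary choice

  res->setHeader("Content-Type", "text/plain");
  res->print("LED set to ");
  res->print(state.c_str());
}
```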