Support forms in POST #29
No, you're not. Request params will contain only the values passed after the question mark in the request URL, like https://esp32.local/?foo=bar. At the moment, the server library does not support any special form body parsing, as I expected it mostly to be used for REST-like services, which can use third-party libraries like ArduinoJSON. So for now, you would have to do it on your own by reading the request body. If you use urlencoded forms, you could start by reusing my implementation for parsing URL parameters. I'll tag this as a feature request, but as you might've seen from my reactions on other issues, I'm a bit short on time at the moment.
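The do-it-yourself route for urlencoded forms could look roughly like this. A minimal sketch, assuming the handler has already read the whole request body into a `std::string`; the names `urlDecode` and `parseUrlEncodedBody` are hypothetical helpers, not part of the library:

```cpp
#include <string>
#include <utility>
#include <vector>

// Decodes one percent-encoded token, e.g. "a%20b" -> "a b".
// '+' is treated as a space, as browsers encode form fields that way.
static std::string urlDecode(const std::string &in) {
  std::string out;
  for (size_t i = 0; i < in.size(); ++i) {
    if (in[i] == '+') {
      out += ' ';
    } else if (in[i] == '%' && i + 2 < in.size()) {
      out += static_cast<char>(std::stoi(in.substr(i + 1, 2), nullptr, 16));
      i += 2;
    } else {
      out += in[i];
    }
  }
  return out;
}

// Splits an application/x-www-form-urlencoded body into key/value pairs.
std::vector<std::pair<std::string, std::string>>
parseUrlEncodedBody(const std::string &body) {
  std::vector<std::pair<std::string, std::string>> fields;
  size_t start = 0;
  while (start <= body.size()) {
    size_t end = body.find('&', start);
    if (end == std::string::npos) end = body.size();
    std::string pair = body.substr(start, end - start);
    if (!pair.empty()) {
      size_t eq = pair.find('=');
      if (eq == std::string::npos) {
        // A bare key without '=' becomes a field with an empty value.
        fields.emplace_back(urlDecode(pair), "");
      } else {
        fields.emplace_back(urlDecode(pair.substr(0, eq)),
                            urlDecode(pair.substr(eq + 1)));
      }
    }
    start = end + 1;
  }
  return fields;
}
```

This buffers the whole body, which is fine for short text fields but not for uploads.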
I totally understand. Thanks for replying.
For uploading files, if you need to stick to plain HTML forms, multipart/form-data is also the only option I know of. So you'll need to parse the multipart request body. The only other option that comes to my mind is to read the file in the browser with client-side JavaScript and send a custom POST XHR to the server that contains only the file's content as body. Then you could just read the whole request body on the server. But that might not be feasible in every use case, and it's only an easier solution if you've got some experience with JS and the characteristics of various browsers' JS engines.
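To illustrate what parsing a multipart body involves, here is a rough sketch that splits a fully buffered body at its boundary lines. The function name is hypothetical; a real parser should work on a stream and handle the RFC 2046 details (preamble, transport padding, LF-only line endings) that this ignores:

```cpp
#include <string>
#include <vector>

// Splits a complete multipart/form-data body into its raw parts
// (headers + content), given the boundary from the Content-Type header.
std::vector<std::string> splitMultipartBody(const std::string &body,
                                            const std::string &boundary) {
  std::vector<std::string> parts;
  const std::string delim = "--" + boundary;       // delimiter line prefix
  size_t pos = body.find(delim);
  while (pos != std::string::npos) {
    pos += delim.size();
    if (body.compare(pos, 2, "--") == 0) break;    // closing "--boundary--"
    pos = body.find("\r\n", pos);                  // end of the delimiter line
    if (pos == std::string::npos) break;
    pos += 2;
    size_t next = body.find("\r\n" + delim, pos);  // next delimiter after CRLF
    if (next == std::string::npos) break;
    parts.push_back(body.substr(pos, next - pos)); // part headers + content
    pos = next + 2;                                // continue at next "--boundary"
  }
  return parts;
}
```

Each returned part still contains its own headers (e.g. Content-Disposition) followed by an empty line and the content.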
I've implemented a first version of multipart form parsing. I'm attaching my source code here for future reference as a starting point. Of course, it is far from perfect. For now, it keeps the entire field value in memory, which makes handling big files impossible. Instead, it should call the handler for every received chunk of the file. Oh well.
Thank you for providing the code, that will be a really good starting point! I also thought about how one could deal with large uploads, and I came up with an API that allows iterating over the body's fields and then reading each field step by step using a buffer, so that you could e.g. directly write it to an SD card. That should allow for arbitrary body sizes, but comes with the downside of not being able to access random fields of the body. To address both ways of body encoding (as urlencoded has way less overhead for short text values), there could be a parser for each content type based on the same body parser API, making the content encoding easily interchangeable. Usage should then be something like this:

```cpp
void handleRequest(HTTPRequest * req, HTTPResponse * res) {
  HTTPMultipartBodyParser * bParser = new HTTPMultipartBodyParser(req);
  while (bParser->nextField()) {
    std::string fieldName = bParser->getFieldName();
    byte buf[128];
    size_t len;
    while (bParser->getRemainingLength() > 0) {
      len = bParser->read(buf, 128);
      // Do something with buf[0..len] and fieldName
    }
  }
  delete bParser;
}
```

It's just a draft of the API and most likely won't even compile at the moment, but I committed it for reference to the bodyparser branch (7861a98).
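To make the memory behaviour of that iterate-then-read API concrete, here is a toy stand-in backed by an in-memory field list. `FakeBodyParser` is purely hypothetical illustration, not library code; it only mimics the proposed shape, showing that peak memory stays at the read buffer's size regardless of field length:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for the proposed body parser API, backed by a vector of
// (name, value) pairs instead of a real HTTP request.
class FakeBodyParser {
public:
  explicit FakeBodyParser(std::vector<std::pair<std::string, std::string>> fields)
      : fields_(std::move(fields)), idx_(-1), offset_(0) {}

  // Advances to the next field; returns false when the body is exhausted.
  bool nextField() {
    ++idx_;
    offset_ = 0;
    return static_cast<size_t>(idx_) < fields_.size();
  }

  std::string getFieldName() const { return fields_[idx_].first; }

  size_t getRemainingLength() const {
    return fields_[idx_].second.size() - offset_;
  }

  // Copies up to maxLen bytes of the current field's value into buf.
  size_t read(uint8_t *buf, size_t maxLen) {
    size_t n = std::min(maxLen, getRemainingLength());
    std::memcpy(buf, fields_[idx_].second.data() + offset_, n);
    offset_ += n;
    return n;
  }

private:
  std::vector<std::pair<std::string, std::string>> fields_;
  int idx_;
  size_t offset_;
};
```

A handler using this never holds more than one 128-byte chunk at a time, even for a multi-megabyte file field.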
I'm making good progress implementing this, based on the bodyparser branch, but I'm running into an issue with the suggested API.
That API was really only a suggestion on how one could implement this, and I see the problem. I'd clearly prioritize the ability to read parts of arbitrary length over the ability to get the length in advance.
I think binary data may be the easy case. If the client decides to make use of a content transfer encoding, decoding could be provided by functions sharing a common shape:

```cpp
/**
 * Removes quoted-printable content transfer encoding.
 * data_in   buffer to read from
 * size_in   size of the data_in buffer
 * data_out  buffer to write the decoded data to
 * size_out  number of bytes written to data_out
 * returns:  number of bytes read from data_in
 */
size_t transferDecodeQuotedPrintable(uint8_t *data_in, size_t size_in, uint8_t *data_out, size_t &size_out);
```

That should allow handling all encodings while the caller can remain agnostic of the symbol length in the encoded data. The caller only has to check how many bytes were consumed and how many were written.
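One possible implementation of that signature, as a sketch: it decodes `=XY` escapes and drops soft line breaks, and stops before an incomplete escape at the end of the input so the caller can retry with more data. The exact error policy for malformed escapes (passed through verbatim here) is an assumption:

```cpp
#include <cstddef>
#include <cstdint>

// Maps an ASCII hex digit to its value, or -1 if not a hex digit.
static int hexVal(uint8_t c) {
  if (c >= '0' && c <= '9') return c - '0';
  if (c >= 'A' && c <= 'F') return c - 'A' + 10;
  if (c >= 'a' && c <= 'f') return c - 'a' + 10;
  return -1;
}

size_t transferDecodeQuotedPrintable(uint8_t *data_in, size_t size_in,
                                     uint8_t *data_out, size_t &size_out) {
  size_t in = 0;
  size_out = 0;
  while (in < size_in) {
    if (data_in[in] != '=') {
      data_out[size_out++] = data_in[in++];   // literal byte
    } else if (in + 2 >= size_in) {
      break;                                  // incomplete escape, wait for more input
    } else if (data_in[in + 1] == '\r' && data_in[in + 2] == '\n') {
      in += 3;                                // soft line break, emit nothing
    } else {
      int hi = hexVal(data_in[in + 1]);
      int lo = hexVal(data_in[in + 2]);
      if (hi < 0 || lo < 0) {
        data_out[size_out++] = data_in[in++]; // malformed escape, pass '=' through
      } else {
        data_out[size_out++] = static_cast<uint8_t>((hi << 4) | lo);
        in += 3;
      }
    }
  }
  return in;                                  // bytes consumed from data_in
}
```

Since decoded output is never longer than the input, a `data_out` buffer of `size_in` bytes is always sufficient.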
I've created a pull request. I'm not 100% happy with the code, especially because I've had to implement yet another level of buffering (I saw the existing buffered reading code, but it's in private methods, and I've decided to implement my own because of the tricky way of testing for boundaries while reading binary data). Performance seems sort of decent despite the extra copying.
I tried to send POST requests with forms. I tried both x-www-form-urlencoded and multipart/form-data, but the request params array is empty in all cases. Does the server support handling HTML forms? Am I doing something wrong?