Reading through the documentation, it doesn't seem to be clear how to handle long-running processes that fetch data at intervals over time.
The "hello world" paradigm that we see here:
- Get a client
- Get a session from the client
- Perform an operation on the session
- Close the operation
- Close the session
- Close the client
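
For clarity, here is a minimal sketch of that flow. The `Client`, `session()`, and `execute()` names are placeholders for whatever the SDK actually exposes, not its real API:

```python
# Placeholder names; the real SDK's classes/methods may differ.
client = Client("<connection string>")    # get a client
session = client.session()                # get a session from the client
operation = session.execute("SELECT 1")   # perform an operation on the session
operation.close()                         # close the operation
session.close()                           # close the session
client.close()                            # close the client
```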
If we have something like an HTTP trigger in an Azure Functions instance, what can be reused:
- Can we share a client as a singleton, or do we need a new one for each HTTP request that comes in and executes a query?
- Can we share a session as a singleton, or do we need a new one for each HTTP request that comes in and executes a query?
If these resources can be used as singletons, what's the recommended error handling to make sure they get cleaned up and reinitialized if there's an error?
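
To make the question concrete, the pattern we have in mind looks something like the sketch below: a module-level client reused across warm invocations of the function, torn down and rebuilt if a call fails. The `Client`, `session()`, and `execute()` names and the connection string are placeholders, not the SDK's actual API:

```python
import azure.functions as func

_client = None  # reused across warm invocations of this function instance


def _get_client():
    # Lazily create the shared client (placeholder Client name / connection string).
    global _client
    if _client is None:
        _client = Client("<connection string>")
    return _client


def _reset_client():
    # Drop the shared client so the next invocation rebuilds it from scratch.
    global _client
    try:
        if _client is not None:
            _client.close()
    finally:
        _client = None


def main(req: func.HttpRequest) -> func.HttpResponse:
    try:
        # New session per request? Or is this also safe to share?
        session = _get_client().session()
        try:
            operation = session.execute("SELECT ...")
            operation.close()
        finally:
            session.close()
        return func.HttpResponse("ok")
    except Exception:
        # On any failure, clean up and force re-initialization on the next request.
        _reset_client()
        raise
```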
The only guidance in the docs is:

> "After you finish working with the operation, session or client, it is better to close it, each of them has a respective method (close())."
The basic question here is: at what point do we consider ourselves "finished" working with each object?