Instead, log the exception and send an internal error status back to the client.
This should be OK as long as the exception does not occur after
the handler has already called req.response().
This function now takes any remaining arguments passed on the command line
after kore parsed its own.
For C the new prototype looks like this:
void kore_parent_configure(int argc, char **argv);
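As a hedged sketch, a C application could simply walk the leftover arguments; the
log messages below are purely illustrative:

#include <kore/kore.h>
#include <syslog.h>

/* Receives any command line arguments kore did not consume itself. */
void
kore_parent_configure(int argc, char **argv)
{
    int    i;

    for (i = 0; i < argc; i++)
        kore_log(LOG_NOTICE, "application argument %d: %s", i, argv[i]);
}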
For python code, kore will pass each argument to the function so you
can do things like:
def kore_parent_configure(arg1, arg2):
    print("parent configured with", arg1, arg2)
The HTTP layer used to make a copy of each incoming header and its
value for a request. Stop doing that and make HTTP headers zero-copy
all across the board.
This change comes with some API changes, notably the
http_request_header() function, which now takes a const char **
out pointer rather than a char **.
This commit also constifies several members of http_request, beware.
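For example, a handler fetching a header now looks roughly like this (the handler
name and log message are just illustration):

#include <kore/kore.h>
#include <kore/http.h>
#include <syslog.h>

int    page(struct http_request *);

int
page(struct http_request *req)
{
    const char    *agent;

    /* The out pointer is now const char *, matching the zero-copy headers. */
    if (http_request_header(req, "user-agent", &agent))
        kore_log(LOG_NOTICE, "user-agent: %s", agent);

    http_response(req, 200, NULL, 0);
    return (KORE_RESULT_OK);
}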
Additionally, rework how the worker processes deal with the accept lock.
Before:
if a worker held the accept lock and it accepted a new connection
it would release the lock for others and back off for 500ms before
attempting to grab the lock again.
This approach worked, but under high load the constant 500ms back-off becomes noticeable.
Now:
- workers not holding the accept lock and without any connections
will wait for a shorter period before returning from kore_platform_event_wait().
- workers not holding the accept lock will no longer blindly wait
an arbitrary amount of time in kore_platform_event_wait() but will look
at how long remains until their next lock grab attempt and base their
timeout on that.
- if a worker's next_lock timeout is up and it failed to grab the
lock, it will try again in half that time.
- the worker process holding the lock will, when releasing it,
double check whether it still has room for new connections; if it does,
it keeps the lock until it is full. This prevents the lock from
constantly bouncing between several non-busy worker processes.
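Roughly, the timeout selection now works along these lines; the sketch below uses
invented names and numbers for illustration and is not Kore's actual code:

#include <stdint.h>

/* Sketch: derive the kore_platform_event_wait() timeout for a worker. */
static uint64_t
event_wait_timeout(uint64_t now_ms, uint64_t next_lock_ms, int has_lock, int has_conns)
{
    if (has_lock)
        return (100);       /* the lock holder keeps polling for new connections */
    if (!has_conns)
        return (10);        /* idle workers without connections wake up sooner */
    if (now_ms < next_lock_ms)
        return (next_lock_ms - now_ms);  /* sleep until the next grab attempt */
    return (0);             /* a grab is due now; on failure, retry in half the time */
}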
Additional fixes:
- Reduce the number of times we check the timeout list, only do it twice
per second rather than every event tick.
- Fix the solo worker count for TLS (we actually run two processes, not one).
- Make sure we don't accidentally miscalculate the idle time, causing new
connections under heavy load to be dropped instantly.
- Swap from gettimeofday() to clock_gettime() now that macOS has caught up.
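The timing swap mentioned above essentially amounts to something like this (the
helper name is illustrative, not necessarily Kore's):

#include <stdint.h>
#include <time.h>

/* Millisecond timestamps via clock_gettime() instead of gettimeofday(). */
static uint64_t
time_ms(void)
{
    struct timespec    ts;

    (void)clock_gettime(CLOCK_MONOTONIC, &ts);
    return ((uint64_t)ts.tv_sec * 1000 + (uint64_t)(ts.tv_nsec / 1000000));
}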
When the pgsql layer was introduced it was tightly coupled with the
http layer in order to make async work fluently.
The time has come to split these up and follow the same method we
used for tasks, allowing either http requests to be tied to a pgsql
data structure or a simple callback function.
This also reworks the internal queueing of pgsql requests until
connections to the db are available again.
The following API functions were changed:
- kore_pgsql_query_init() -> kore_pgsql_setup()
no longer takes an http_request parameter.
- NEW kore_pgsql_init()
must be called before operating on a kore_pgsql structure.
- NEW kore_pgsql_bind_request()
binds an http_request to a kore_pgsql data structure.
- NEW kore_pgsql_bind_callback()
binds a callback to a kore_pgsql data structure.
With all of this you can now build kore with PGSQL=1 NOHTTP=1.
The pgsql/ example has been updated to reflect these changes and
new features.
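A rough sketch of the new call order when binding a callback; KORE_PGSQL_ASYNC,
kore_pgsql_query() and kore_pgsql_logerror() are carried over from the existing
pgsql example, the callback signature shown is an assumption, and queueing/error
handling is omitted:

#include <kore/kore.h>
#include <kore/pgsql.h>

void    db_state(struct kore_pgsql *, void *);

void
fire_query(void)
{
    /* Static for illustration only: the structure must outlive the async query. */
    static struct kore_pgsql    sql;

    /* Always initialize the structure first now. */
    kore_pgsql_init(&sql);

    /* Tie completion to a callback instead of an http_request (NOHTTP friendly). */
    kore_pgsql_bind_callback(&sql, db_state, NULL);

    /* kore_pgsql_setup() no longer takes an http_request parameter. */
    if (!kore_pgsql_setup(&sql, "db", KORE_PGSQL_ASYNC)) {
        kore_pgsql_logerror(&sql);
        return;
    }

    if (!kore_pgsql_query(&sql, "SELECT name FROM table"))
        kore_pgsql_logerror(&sql);
}

/* Invoked by Kore as the query progresses; inspect the pgsql state here. */
void
db_state(struct kore_pgsql *pgsql, void *arg)
{
}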
Convert the data parameter to a string if the op was WEBSOCKET_OP_TEXT
or convert to bytes if op was WEBSOCKET_OP_BINARY so the callee does
not have to do this anymore.
Also let this function reset offset and lengths for http_body_read().
Make use of this function in the python code so req.body can be called
multiple times in succession.
This commit adds the ability to use python "await" to suspend
execution of your page handler until the query sent to postgresql
has returned a result.
This is built upon the existing asynchronous query framework Kore had.
With this you can now write stuff like:
async def page(req):
    result = await req.pgsql("db", "SELECT name FROM table")
    req.response(200, json.dumps(result).encode("utf-8"))
The above code will fire off a query and suspend itself so Kore can
take care of business as usual until the query is successful at which
point Kore will jump back into the handler and resume.
This does not use threading, it's purely based on Python's excellent
coroutines and generators and Kore's built-in pgsql support.