
REST Service to Enqueue Tasks #14

Open
ripienaar opened this issue Feb 1, 2022 · 5 comments
Labels
enhancement New feature or request

Comments

@ripienaar
Member

A REST service should be able to front one or more specific queues; it should accept requests over HTTP(S) and enqueue them into JetStream.

@ripienaar ripienaar added the enhancement New feature or request label Feb 1, 2022
@drev74
Contributor

drev74 commented Apr 27, 2022

I'm working on something like that in my test setup. I'm building an API for a customer and experimenting with asyncjobs to implement it.

My current setup

  1. API Gateway - an HTTP server Echo with JWT Auth
  2. JetStream + two asyncjobs-based queues, similar to this. PING and PONG are the ingress and egress queues respectively
  3. SDK service to process tasks

How it works

  1. A REST request with a JWT token comes into an API gateway. It validates JWT, checks API semantics and creates an internal JOB.
  2. API Gateway uses asyncjobs to create a task and commit it to the PING queue
  3. An SDK service is implemented as a handler to process the PING queue and write results back into the PONG queue.

Current issues

  1. This system is fully async, thanks to NATS, JetStream and asyncjobs.
  2. Each SDK call takes a different amount of time to process, so queue processing MUST be non-blocking and parallel.
  3. When the API gateway commits a task into the PING queue, it must preserve the echo.Context() to be able to respond to the original REST request. However, the API gateway only commits a task into a queue and preserves neither call info nor request ordering.
```go
// Repack into the full Runner result
out := APIResponseMessage{
	Service: serviceName,
	Api:     resourceName,
	Data:    res,
}

buf, err := json.Marshal(&out)
if err != nil {
	return sendAPIError(ctx, http.StatusInternalServerError, "Failed to serialize Runner response")
}

// Return the Runner response to the caller
if err = ctx.JSONBlob(http.StatusCreated, buf); err != nil {
	return sendAPIError(ctx, http.StatusInternalServerError, "Failed to send response")
}
```

So my best idea is to save the REST request context (echo.Context()) in a KV store, using a generated JOB ID as the key. This JOB ID relates only to the request and should be preserved across all API calls as metadata.

Wrapping up, the hard part of implementing this REST service is relating the request context to the asyncjobs queue calls.
I've NOT yet used JetStream or NATS as a KV store, but it looks capable of doing that.

It's also possible to use Redis, but it seems like overhead to add another infrastructure layer just to implement an API.

Those are my thoughts on how to get that done.
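One way to sketch the correlation idea: since a live echo.Context cannot itself be serialised into a bucket, store a small replyable record under the JOB ID behind a minimal Put/Get interface that a JetStream KV bucket could later implement. All names here are illustrative, and an in-memory map stands in for the real bucket:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"sync"
)

// Correlation is the serialisable per-request record kept under the JOB
// ID; only replyable state goes in, never the live request context.
type Correlation struct {
	JobID    string `json:"job_id"`
	Service  string `json:"service"`
	Resource string `json:"resource"`
	Status   string `json:"status"` // e.g. "pending", "done"
}

// Store is the minimal KV surface the gateway needs; a NATS JetStream
// KV bucket could satisfy it via its Put/Get operations.
type Store interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

// memStore is an in-memory stand-in used here instead of a real bucket.
type memStore struct {
	mu sync.Mutex
	m  map[string][]byte
}

func newMemStore() *memStore { return &memStore{m: map[string][]byte{}} }

func (s *memStore) Put(key string, value []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = value
	return nil
}

func (s *memStore) Get(key string) ([]byte, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.m[key]
	if !ok {
		return nil, errors.New("key not found")
	}
	return v, nil
}

// saveCorrelation and loadCorrelation are what the gateway and the PONG
// consumer would call on either side of the queue round trip.
func saveCorrelation(s Store, c Correlation) error {
	buf, err := json.Marshal(c)
	if err != nil {
		return err
	}
	return s.Put(c.JobID, buf)
}

func loadCorrelation(s Store, jobID string) (Correlation, error) {
	var c Correlation
	buf, err := s.Get(jobID)
	if err != nil {
		return c, err
	}
	return c, json.Unmarshal(buf, &c)
}

func main() {
	kv := newMemStore()
	_ = saveCorrelation(kv, Correlation{JobID: "abc123", Service: "billing", Resource: "invoices", Status: "pending"})
	c, _ := loadCorrelation(kv, "abc123")
	fmt.Printf("%+v\n", c)
}
```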

@ripienaar
Member Author

We can certainly expand the asyncjobs concept of a job here to have a map[string]interface{}-style arbitrary "additional info" field where you could store this?

@ripienaar
Member Author

Our jobs are essentially stored in a KV - though slightly specialised to disable some specific KV behaviours that we didn't need.

@ripienaar
Member Author

ripienaar commented Apr 30, 2022

For the API we'd need at least these, I think, though I'm open to suggestions of course.

  • GET /task/{id} - gets a single task ⬆️
  • POST /task/{id} - creates a single task (probably not needed tbh, see below)
  • DELETE /task/{id} - deletes a task
  • POST /tasks - creates a task ⬆️
  • GET /tasks - gets multiple tasks, accepts some kind of filter, maybe paging limits+start
  • GET /tasks/purge - deletes all tasks
  • POST /tasks/retry - retries one or more tasks ⬆️

And then similar endpoints for Queues and Scheduled tasks.

I'd also like a server-sent events endpoint to watch the execution of a specific task.

Regarding other stuff - mainly I need a full work queue system with a DAG of related jobs and parallel execution. But I am working on some closed-source related work which might become open later.

@ripienaar
Member Author

Of course we can start with just the important ones - I added some emojis to indicate those.
