Jobnik is a lightweight web service that enables triggering, listing, monitoring, and retrieving logs from Kubernetes Jobs via a REST API. Ideal for event-driven workflows, CI/CD pipelines, and custom job execution needs.
- Trigger Kubernetes jobs with custom environment variables and arguments
- Jobnik UI (TBD)
- List jobs with pagination and metadata
- Fetch logs from job pods
- Automatically generates unique job names
- Cleans up completed jobs 30 seconds after success
- A Kubernetes cluster with appropriate RBAC permissions to create, list, and delete jobs
- A base Job template deployed in your cluster
- A Docker image for your job logic
- Optionally, ArgoCD for managing Jobnik deployments
```bash
helm install jobnik ./helm/jobnik
```
- 1 replica
- RBAC enabled
- Resource requests: 128Mi memory, 64m CPU
- Exposed as a ClusterIP service on port 80, targeting port 8080
POST /api/job
Creates and runs a new Kubernetes job based on an existing template.
| Field | Type | Required | Description |
|---|---|---|---|
| `jobName` | string | Yes | Base Job name to clone and trigger |
| `namespace` | string | Yes | Namespace where job will run |
| `envVars` | object (string) | No | Key-value pairs of environment variables |
| `args` | array of strings | No | Command-line arguments to pass to job |
```bash
curl -X POST http://localhost:8080/api/job -H "Content-Type: application/json" -d '{
  "jobName": "test-job",
  "namespace": "default",
  "envVars": {
    "LOG_LEVEL": "debug",
    "TIMEOUT": "30s"
  },
  "args": ["--export", "--dry-run"]
}'
```
```json
{
  "message": "Job test-job-run-1712345678-1234 triggered successfully",
  "jobName": "test-job-run-1712345678-1234",
  "namespace": "default"
}
```
GET /api/jobs
Fetches jobs with optional pagination and namespace filtering.
| Field | Type | Required | Description |
|---|---|---|---|
| `namespace` | string | No | Namespace to list jobs from (or `all`) |
| `limit` | int | No | Max number of jobs to return (default: 10) |
| `offset` | int | No | Pagination offset (default: 0) |
curl "http://localhost:8080/api/jobs?namespace=default&limit=5&offset=0"
```
X-Total-Count: 25
X-Limit: 5
X-Offset: 0
```
```json
{
  "total": 25,
  "limit": 5,
  "offset": 0,
  "count": 5,
  "jobs": [
    {
      "name": "test-job-run-1712345678-0001",
      "status": "succeeded"
    }
  ]
}
```
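The `total` field (and the `X-Total-Count` header) makes it easy to page through every job. Below is a minimal paging sketch in Python, assuming the service is reachable at `localhost:8080` as in the curl example above:

```python
import requests

base_url = "http://localhost:8080/api/jobs"
limit = 5
offset = 0
jobs = []

# Walk /api/jobs page by page using the limit/offset parameters,
# stopping once the reported total has been collected.
while True:
    resp = requests.get(base_url, params={"namespace": "default", "limit": limit, "offset": offset})
    resp.raise_for_status()
    body = resp.json()
    jobs.extend(body["jobs"])
    offset += limit
    if offset >= body["total"]:
        break

print(f"Collected {len(jobs)} jobs")
```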
GET /api/job/logs
Fetch logs from a pod belonging to a Kubernetes job.
| Field | Type | Required | Description |
|---|---|---|---|
| `jobName` | string | Yes | Name of the job |
| `namespace` | string | No | Namespace (default: `default`) |
| `container` | string | No | Container name (if job has multiple) |
curl "http://localhost:8080/api/job/logs?jobName=test-job-run-1712345678&namespace=default"
```json
{
  "jobName": "test-job-run-1712345678",
  "namespace": "default",
  "logs": "Starting job...\nStep 1 done...\nJob completed."
}
```
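The same endpoint works from Python using query parameters; here is a short sketch (the optional `container` parameter is only needed when the job pod runs more than one container):

```python
import requests

# Fetch logs for a specific job run; namespace defaults to "default" on the server side.
resp = requests.get(
    "http://localhost:8080/api/job/logs",
    params={"jobName": "test-job-run-1712345678", "namespace": "default"},
)
resp.raise_for_status()
print(resp.json()["logs"])
```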
Replace `job-service.default.svc.cluster.local` with your actual service name.
```python
import requests

url = "http://job-service.default.svc.cluster.local:8080/api/job"
payload = {
    "jobName": "test-job",
    "namespace": "default",
    "envVars": {
        "LOG_LEVEL": "debug"
    },
    "args": ["--export"]
}
headers = {"Content-Type": "application/json"}

response = requests.post(url, json=payload, headers=headers)
print(response.json())
```
```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Payload for the job trigger request.
    data := map[string]interface{}{
        "jobName":   "test-job",
        "namespace": "default",
        "envVars": map[string]string{
            "LOG_LEVEL": "debug",
        },
        "args": []string{"--dry-run"},
    }

    jsonData, err := json.Marshal(data)
    if err != nil {
        log.Fatal(err)
    }

    req, err := http.NewRequest("POST", "http://job-service.default.svc.cluster.local:8080/api/job", bytes.NewBuffer(jsonData))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    fmt.Println("Status:", resp.StatusCode)
}
```
curl -X POST "http://job-service.default.svc.cluster.local:8080/api/job" -H "Content-Type: application/json" -d '{
"jobName": "test-job",
"namespace": "default",
"envVars": {
"LOG_LEVEL": "debug"
},
"args": ["--dry-run"]
}'
- Make sure the base job you're cloning is pre-created in the target namespace.
- Jobs are monitored in the background and deleted 30 seconds after successful completion (see the sketch below for fetching logs before cleanup).
- You can extend Jobnik to support more features like retries, notifications, or parallel jobs.
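Because successful jobs are cleaned up roughly 30 seconds after completion, log retrieval has to happen while the job still exists. The following end-to-end sketch in Python is only illustrative: the polling interval, the 60-second deadline, and the assumption that `/api/job/logs` returns a non-2xx status until logs are available are choices made for the example, not guarantees made by Jobnik.

```python
import time
import requests

BASE = "http://localhost:8080"  # adjust to your Jobnik service address

# Trigger a run of the pre-created base job.
resp = requests.post(
    f"{BASE}/api/job",
    json={"jobName": "test-job", "namespace": "default", "args": ["--dry-run"]},
)
resp.raise_for_status()
run_name = resp.json()["jobName"]

# Poll for logs while the job still exists (it is deleted ~30s after success).
# Assumes the logs endpoint responds with a non-2xx status until logs exist.
deadline = time.time() + 60
logs = None
while time.time() < deadline:
    log_resp = requests.get(
        f"{BASE}/api/job/logs",
        params={"jobName": run_name, "namespace": "default"},
    )
    if log_resp.ok:
        logs = log_resp.json()["logs"]
        break
    time.sleep(5)

print(logs if logs is not None else "no logs retrieved before cleanup")
```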