This project leverages Red Hat build of Quarkus 3.15.x, the Supersonic Subatomic Java Framework. More specifically, the project is implemented using Red Hat build of Apache Camel v4.8.x for Quarkus.
The purpose is to demonstrate the implementation of the Infinispan Idempotent Repository to synchronize concurrent access, as well as the use of the Apache Camel Quarkus Infinispan extension.
The following REST endpoints are exposed (sample `curl` calls follow the list):

- `/api/v1/fruits-and-legumes-api/fruits`:
  - `GET`: returns a list of hard-coded and added fruits.
  - `POST`: adds a fruit to the list of fruits.
- `/api/v1/fruits-and-legumes-api/legumes`: only the `GET` method is supported. Returns a list of hard-coded legumes.
- `/api/v1/fruits-and-legumes-api/openapi.json`: returns the OpenAPI 3.0 specification for the Fruits and Legumes API service.
- `/api/v1/minio-file-uploader-service/csv`: only the `POST` method is supported. Uploads the fruits.csv file to the MinIO server.
- `/api/v1/minio-file-uploader-service/json`: only the `POST` method is supported. Uploads the fruits.json file to the MinIO server.
- `/api/v1/minio-file-uploader-service/xml`: only the `POST` method is supported. Uploads the fruits.xml file to the MinIO server.
- `/api/v1/minio-file-uploader-service/openapi.json`: returns the OpenAPI 3.0 specification for the MinIO File Uploader service.
- `/q/health`: returns the Camel Quarkus MicroProfile health checks.
- `/q/metrics`: returns the Camel Quarkus Micrometer metrics in Prometheus format.
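For instance, once the application is running locally in dev mode (port 8080), the Fruits and Legumes API can be exercised with `curl` as sketched below. The JSON body used for the `POST` call is an assumed example of the fruit object shape, not taken from the project sources:

```shell
# List the fruits (hard-coded plus any previously added ones)
curl http://localhost:8080/api/v1/fruits-and-legumes-api/fruits

# Add a fruit (field names are an assumed example of the fruit JSON shape)
curl -X POST -H "Content-Type: application/json" \
  -d '{"name": "Mango", "description": "Tropical fruit"}' \
  http://localhost:8080/api/v1/fruits-and-legumes-api/fruits

# List the hard-coded legumes
curl http://localhost:8080/api/v1/fruits-and-legumes-api/legumes
```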
- Maven 3.8.1+
- JDK 21 installed with `JAVA_HOME` configured appropriately
- A running Red Hat OpenShift 4 cluster
- A running Red Hat Data Grid v8.3 cluster.
  NOTE: The `config/datagrid` folder contains OpenShift Cache Custom Resources to be created. For instance, the following command line would create the `fruits-legumes-replicated-cache` and `idempotency-replicated-cache` replicated caches if the Red Hat Data Grid cluster is deployed in the `datagrid-cluster` namespace:

  ```shell
  oc -n datagrid-cluster apply -f ./config/datagrid
  ```

  - `fruits-legumes-replicated-cache-definition`: `fruits-legumes-replicated-cache` used by the `FruitsAndLegumesAPI`.
  - `idempotency-replicated-cache-definition`: `idempotency-replicated-cache` used for idempotency purposes by the `FilePollerRoute`.
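The actual cache definitions live in the `config/datagrid` folder. As an illustration only, a replicated cache created through the Data Grid operator `Cache` Custom Resource typically looks like the sketch below; the `clusterName` value (the name of the `Infinispan` CR) and the XML cache template are assumptions, not the project's actual resources:

```shell
oc -n datagrid-cluster apply -f - <<EOF
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: fruits-legumes-replicated-cache-definition
spec:
  clusterName: datagrid-cluster    # name of the Infinispan CR (assumed)
  name: fruits-legumes-replicated-cache
  template: |
    <replicated-cache mode="SYNC" statistics="true">
      <encoding media-type="application/x-protostream"/>
    </replicated-cache>
EOF
```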
- A running MinIO server to provide object storage used by the idempotent consumer route.

  NOTE: The `config/minio` folder contains resources to deploy a simple MinIO server in the `ceq-services-jvm` namespace on OpenShift.

  You can also run a simple MinIO server locally in a container with the following `podman` instructions:

  - Create the podman volume to persist MinIO data:

    ```shell
    podman volume create minio-data
    ```

  - Run the MinIO container:

    ```shell
    podman run -d --name minio \
      -p 9000:9000 \
      -p 9090:9090 \
      -v minio-data:/data \
      -e "MINIO_ROOT_USER=minioadmin" \
      -e "MINIO_ROOT_PASSWORD=d-XT,YJ.XF3c_WT[" \
      quay.io/minio/minio server /data --console-address ":9090"
    ```

  - The MinIO administration web console is then available at http://localhost:9090/login
  - The MinIO API endpoint is also available at http://localhost:9000
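  Depending on its configuration, the Camel Minio component may create the target bucket automatically. If you prefer to prepare one manually, a sketch using the MinIO client (`mc`) follows; the alias `local` and the bucket name `fruits-bucket` are assumptions, not names taken from the project configuration:

  ```shell
  mc alias set local http://localhost:9000 minioadmin 'd-XT,YJ.XF3c_WT['
  mc mb local/fruits-bucket
  mc ls local
  ```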
- A truststore containing the Red Hat Data Grid v8.3 server public certificate. Below are sample command lines to generate one:

  ```shell
  # Use the Java cacerts as the basis for the truststore
  cp ${JAVA_HOME}/lib/security/cacerts ./tls-keys/truststore.p12
  keytool -storepasswd -keystore ./tls-keys/truststore.p12 -storepass changeit -new 'P@ssw0rd'
  # Importing the Red Hat Data Grid server public certificate into the truststore
  keytool -importcert -trustcacerts -keystore ./tls-keys/truststore.p12 -file ./tls-keys/rhdg.fullchain.pem -storepass P@ssw0rd -v -noprompt
  ```

  💡 Example of how to obtain the Red Hat Data Grid server public certificate:

  ```shell
  openssl s_client -showcerts -servername <Red Hat Data Grid cluster OpenShift route> -connect <Red Hat Data Grid cluster OpenShift route>:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p'
  ```

  with `<Red Hat Data Grid cluster OpenShift route>`: the OpenShift route hostname of the Red Hat Data Grid cluster, e.g. `datagrid.apps.ocp4.jnyilimb.eu`
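  To confirm the certificate was imported, you can list the truststore entries; unless an `-alias` option was passed to `keytool -importcert`, the imported entry is expected under the default alias `mykey`:

  ```shell
  keytool -list -keystore ./tls-keys/truststore.p12 -storepass 'P@ssw0rd'
  ```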
You can run your application in dev mode, which enables live coding, using:

```shell
./mvnw clean compile quarkus:dev
```
NOTE: Quarkus now ships with a Dev UI, which is available in dev mode only at http://localhost:8080/q/dev/.
The application can be packaged using:

```shell
./mvnw package
```

It produces the `quarkus-run.jar` file in the `target/quarkus-app/` directory.
Be aware that it’s not an über-jar as the dependencies are copied into the `target/quarkus-app/lib/` directory.
The application is now runnable using:

```shell
java -Dquarkus.kubernetes-config.enabled=false -jar target/quarkus-app/quarkus-run.jar
```

NOTE: `-Dquarkus.kubernetes-config.enabled=false` disables the Kubernetes Config extension so the application does not try to read ConfigMaps and Secrets when running outside OpenShift.
If you want to build an über-jar, execute the following command:

```shell
./mvnw package -Dquarkus.package.type=uber-jar
```

The application, packaged as an über-jar, is now runnable using:

```shell
java -Dquarkus.kubernetes-config.enabled=false -jar target/*-runner.jar
```
You can create a native executable using:

```shell
./mvnw package -Pnative
```

Or, if you don't have GraalVM installed, you can run the native executable build in a container using:

```shell
./mvnw package -Pnative -Dquarkus.native.container-build=true
```

You can then execute your native executable with:

```shell
./target/camel-quarkus-datagrid-tester-1.0.0-runner
```
If you want to learn more about building native executables, please consult https://quarkus.io/guides/maven-tooling.
- The `fruits-legumes-replicated-cache` and `idempotency-replicated-cache` caches have been created in the Red Hat Data Grid cluster.

  NOTE: The `config/datagrid` folder contains OpenShift Cache Custom Resources to be created. For instance, the following command line would create the `fruits-legumes-replicated-cache` and `idempotency-replicated-cache` replicated caches if the Red Hat Data Grid cluster is deployed in the `datagrid-cluster` namespace:

  ```shell
  oc -n datagrid-cluster apply -f ./config/datagrid
  ```

- The MinIO server is running.
- Login to the OpenShift cluster:

  ```shell
  oc login ...
  ```

- Create an OpenShift project to host the service:

  ```shell
  oc new-project ceq-services-jvm --display-name="Red Hat build of Apache Camel for Quarkus Apps - JVM Mode"
  ```

- Deploy the simple MinIO server if not already deployed:

  ```shell
  oc apply -f ./config/minio
  ```

- Create the secret containing the camel-quarkus-datagrid-tester truststore:

  a. With custom certificates ✅ USE THIS

  ```shell
  oc create secret generic camel-quarkus-datagrid-tester-truststore-secret --from-file=./tls-keys/truststore.p12
  ```

  b. OPTIONAL: With OpenShift signed certificates 💡 THIS IS FOR INFORMATION PURPOSES ONLY

  ```shell
  oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data "tls.crt"}}' | openssl base64 -d -A > ./tls-keys/server.crt
  # Use the Java cacerts as the basis for the truststore
  cp ${JAVA_HOME}/lib/security/cacerts ./tls-keys/truststore.p12
  keytool -storepasswd -keystore ./tls-keys/truststore.p12 -storepass changeit -new 'P@ssw0rd'
  # Importing the OpenShift signing service certificate into the truststore
  keytool -importcert -keystore ./tls-keys/truststore.p12 -storepass 'P@ssw0rd' -file ./tls-keys/server.crt -trustcacerts -noprompt
  # Create camel-quarkus-datagrid-tester-truststore-secret
  oc create secret generic camel-quarkus-datagrid-tester-truststore-secret --from-file=./tls-keys/truststore.p12
  ```

- Deploy the CEQ service:

  ```shell
  ./mvnw clean package -Dquarkus.openshift.deploy=true -Dquarkus.container-image.group=ceq-services-jvm
  ```
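Once deployed, the service can be smoke-tested through its OpenShift route. This is a sketch: the route name, and the fact that a route is exposed at all, are assumptions that depend on the `quarkus.openshift.*` settings of the project:

```shell
# Route name assumed to match the application name
URL="https://$(oc -n ceq-services-jvm get route camel-quarkus-datagrid-tester -o jsonpath='{.spec.host}')"
# Health checks and a sample call to the Fruits and Legumes API
curl -k "${URL}/q/health"
curl -k "${URL}/api/v1/fruits-and-legumes-api/fruits"
```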
Jaeger, a distributed tracing system for observability (open tracing).

💡 A simple way of starting a Jaeger tracing server is with `docker` or `podman`:

- Start the Jaeger tracing server:

  ```shell
  podman run --rm -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 -e COLLECTOR_OTLP_ENABLED=true \
    -p 6831:6831/udp -p 6832:6832/udp \
    -p 5778:5778 -p 16686:16686 -p 4317:4317 -p 4318:4318 -p 14250:14250 -p 14268:14268 -p 14269:14269 -p 9411:9411 \
    quay.io/jaegertracing/all-in-one:latest
  ```

- While the server is running, browse to http://localhost:16686 to view tracing events.
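- To have the application send traces to this local collector, you can point the Quarkus OpenTelemetry exporter at the OTLP gRPC port. The property name below is the standard Quarkus OpenTelemetry setting; using it with the local endpoint is an assumption matching the `podman run` command above, not the project's default configuration:

  ```shell
  ./mvnw clean compile quarkus:dev \
    -Dquarkus.otel.exporter.otlp.traces.endpoint=http://localhost:4317
  ```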
- If not already installed, install the Red Hat OpenShift distributed tracing platform (Jaeger) operator with an AllNamespaces scope.

  ⚠️ cluster-admin privileges are required

```shell
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger-product
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: jaeger-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```

- Verify the successful installation of the Red Hat OpenShift distributed tracing platform operator:

  ```shell
  watch oc get sub,csv
  ```

- Create the allInOne Jaeger instance in the dsna-pilot OpenShift project:

```shell
oc apply -f - <<EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-all-in-one-inmemory
spec:
  allInOne:
    options:
      log-level: info
  strategy: allInOne
EOF
```
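- Optionally, verify that the Jaeger instance is up and retrieve the query UI route; the route name below is an assumption (it usually matches the Jaeger instance name):

  ```shell
  oc get jaeger jaeger-all-in-one-inmemory
  oc get route jaeger-all-in-one-inmemory -o jsonpath='{.spec.host}'
  ```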
- OpenShift (guide): Generate OpenShift resources from annotations
- Camel Minio (guide): Store and retrieve objects from Minio Storage Service using Minio SDK
- Camel Platform HTTP (guide): Expose HTTP endpoints using the HTTP server available in the current platform
- Camel Direct (guide): Call another endpoint from the same Camel Context synchronously
- Camel Jackson (guide): Marshal POJOs to JSON and back using Jackson
- YAML Configuration (guide): Use YAML to configure your Quarkus application
- RESTEasy JAX-RS (guide): REST endpoint framework implementing JAX-RS and more
- Camel Bean (guide): Invoke methods of Java beans
- Camel OpenTelemetry (guide): Distributed tracing using OpenTelemetry
- Camel MicroProfile Health (guide): Expose Camel health checks via MicroProfile Health
- Camel Micrometer (guide): Collect various metrics directly from Camel routes using the Micrometer library
- Camel Timer (guide): Generate messages in specified intervals using java.util.Timer
- Camel Infinispan (guide): Read and write from/to Infinispan distributed key/value store and data grid
- Kubernetes Config (guide): Read runtime configuration from Kubernetes ConfigMaps and Secrets
- Camel Rest (guide): Expose REST services and their OpenAPI Specification or call external REST services
Configure your application with YAML
The Quarkus application configuration is located in `src/main/resources/application.yml`.
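For local runs against a Data Grid instance other than the one described in the packaged configuration, the standard Quarkus Infinispan client properties can be overridden on the command line. The property names below are the regular `quarkus.infinispan-client.*` settings; the host, credentials and truststore values are placeholders, not the project's actual configuration:

```shell
java -Dquarkus.kubernetes-config.enabled=false \
  -Dquarkus.infinispan-client.hosts=localhost:11222 \
  -Dquarkus.infinispan-client.username=developer \
  -Dquarkus.infinispan-client.password='<replace_me>' \
  -Dquarkus.infinispan-client.trust-store=./tls-keys/truststore.p12 \
  -Dquarkus.infinispan-client.trust-store-password='P@ssw0rd' \
  -jar target/quarkus-app/quarkus-run.jar
```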
- A running Red Hat 3scale API Management v2.13 platform and Red Hat SSO 7.6 instance to secure the API.
- The 3scale Toolbox CLI installed.
- The following 3scale Toolbox command line imports the API into Red Hat 3scale API Management and secures it using OpenID Connect from the OpenAPI Specification. Red Hat SSO 7 is used as the OpenID Connect Authorization Server.

  💡 NOTE: Adapt the values according to your environment.

  ```shell
  3scale import openapi \
    --override-private-base-url='http://camel-quarkus-datagrid-tester.ceq-services-jvm.svc' \
    --oidc-issuer-type=keycloak \
    --oidc-issuer-endpoint='https://<replace_me_with_client_id>:<replace_me_with_client_secret>@sso.apps.cluster-njb6f.njb6f.sandbox2810.opentlc.com/auth/realms/openshift-cluster' \
    --target_system_name=fruits_and_legumes_api \
    --verbose -d rhpds-apim-demo ./src/main/resources/openapi/openapi.json
  ```

- The following command lines create the application plans and update the policy chain respectively, using the resources in the `config/threescale` folder.

  - Application plans:

    ```shell
    ### Basic plan
    3scale application-plan import \
      --file=./config/threescale/application_plans/basic-plan.yaml \
      --verbose rhpds-apim-demo fruits_and_legumes_api
    ### Premium plan
    3scale application-plan import \
      --file=./config/threescale/application_plans/premium-plan.yaml \
      --verbose rhpds-apim-demo fruits_and_legumes_api
    ```

  - Policy chain:

    ```shell
    3scale policies import \
      --file='./config/threescale/policies/policy_chain.yaml' \
      --verbose rhpds-apim-demo fruits_and_legumes_api
    ```

- Promotion of the new configuration to the 3scale staging and production environments:

  ```shell
  ## Promote the APIcast configuration to the Staging Environment
  3scale proxy deploy rhpds-apim-demo fruits_and_legumes_api --verbose
  ## Promote latest staging Proxy Configuration to the production environment
  3scale proxy-config promote rhpds-apim-demo fruits_and_legumes_api --verbose
  ```
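- Once the configuration has been promoted, a quick way to test the secured API is to obtain an access token from Red Hat SSO and call the API through the 3scale APIcast gateway. This is a sketch only: the client credentials and the APIcast route hostname are placeholders, and `jq` is assumed to be installed:

  ```shell
  # Obtain an access token via the client_credentials grant (values are placeholders)
  TOKEN=$(curl -s -d 'grant_type=client_credentials' \
    -d 'client_id=<replace_me_with_client_id>' \
    -d 'client_secret=<replace_me_with_client_secret>' \
    'https://sso.apps.cluster-njb6f.njb6f.sandbox2810.opentlc.com/auth/realms/openshift-cluster/protocol/openid-connect/token' \
    | jq -r .access_token)

  # Call the API through the APIcast gateway (route hostname is a placeholder)
  curl -H "Authorization: Bearer ${TOKEN}" \
    "https://<replace_me_with_apicast_route>/api/v1/fruits-and-legumes-api/fruits"
  ```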