This project is the top layer of my webserver. It features a reverse proxy that routes all requests, via an auth request, to the expected projects; a backend that handles those auth requests; and a small UI with a brief overview and a login screen.
There is also a whole Elastic Stack behind it, with multiple agents monitoring and observing the system. It can now track what every single client does on my servers. (A client is a set of devices, and a device is an IP address and an agent.)
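To make the auth-request flow concrete, here is a minimal, hedged sketch of what the backend side of such an endpoint could look like with Gin; the route name /auth, the cookie name and the validation logic are illustrative assumptions, not the actual implementation in this repo.

```go
// Hypothetical sketch: the reverse proxy sends an auth subrequest here.
// A 2xx response lets the original request through, 401 blocks it.
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// isValid is a placeholder for the real JWT/session validation.
func isValid(token string) bool { return token != "" }

func main() {
	r := gin.Default()

	r.GET("/auth", func(c *gin.Context) {
		token, err := c.Cookie("session") // assumed cookie name
		if err != nil || !isValid(token) {
			c.Status(http.StatusUnauthorized)
			return
		}
		c.Status(http.StatusOK)
	})

	r.Run(":8080")
}
```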
- n projects behind one domain, with subdomains or locations; optionally require a login to access a project
- actually secure login (? 🙃)
- SQL injection protection
- XSS protection
- secure JWT (see the signing and hashing sketch after this list)
- secure hash for password store
- MFA
- no session fixation
- secure to CSRF
- ABAC
- protection against brute force
- store who accessed the pages
- attempt to track a single user across multiple devices (when on my servers)
- visualize who accessed the pages (kibana)
- implement mini dashboard on the admin page on the host site
- logging, that makes performance and quality reviews easy (kibana)
- send informative mails to the admin and the users
- (close to) completely stateless backend
- OAuth solution to enable central user management across all projects
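As referenced above, a minimal sketch of the password-hashing and JWT-signing ideas from this list; the library choices (golang.org/x/crypto/bcrypt, github.com/golang-jwt/jwt/v5), claim names and secret handling are assumptions, not this repo's actual code.

```go
// Hedged sketch: store only a bcrypt hash of the password and issue a
// short-lived, signed JWT after a successful login.
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
	"golang.org/x/crypto/bcrypt"
)

func main() {
	// Store only the bcrypt hash, never the plain password.
	hash, err := bcrypt.GenerateFromPassword([]byte("correct horse battery staple"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}

	// On login, compare the submitted password against the stored hash.
	if bcrypt.CompareHashAndPassword(hash, []byte("correct horse battery staple")) != nil {
		panic("login failed")
	}

	// Issue a short-lived, signed JWT (claims and secret handling are assumptions).
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"sub": "user-id",
		"exp": time.Now().Add(15 * time.Minute).Unix(),
	})
	signed, err := token.SignedString([]byte("load-this-secret-from-config"))
	if err != nil {
		panic(err)
	}
	fmt.Println(signed)
}
```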
- nginx
- docker
- postgres
- go + gin + logrus
- sveltekit
- elastic, kibana, filebeat, fleet, agents, apm
When developing, it helps to have a reverse proxy setup so that everything can be tested. To make this process easy, I have created the dev-env dir.
docker compose -f ./dev-env/docker-compose.yml up -d
subdomains
To use subdomains with localhost (on macOS) I had to modify the /etc/hosts
file. I added lines like these:
127.0.0.1 michu-tech-dev.net
127.0.0.1 host.michu-tech-dev.net
127.0.0.1 host.backend.michu-tech-dev.net
127.0.0.1 teachu.michu-tech-dev.net
127.0.0.1 room-automation.michu-tech-dev.net
127.0.0.1 kibana.michu-tech-dev.net
k6 is a load-testing tool that helps with testing truly parallel requests.
It needs to be installed on the developer's machine, otherwise the tests in ./backend-k6-test/script.js
won't run.
The file ./db/init.sql creates the DB schema.
The file ./dev-proxy/test-data.sql inserts some test pages, so that there is data to test against.
A host page must exist. (Even the host page goes through an auth request.)
Using Elasticsearch, Kibana, Fleet and so on to keep an overview of my system.
Here is a list of things that are in use on my elastic stack.
- Nginx Metrics and Logs
- Postgres Metrics
- Docker Metrics
- Docker Container logs
- since I have a lot of projects that just run in a Docker container, it is very useful to see all the logs without having to SSH onto the server
- Backend with APM (see the instrumentation sketch after this list)
- Requests should be logged
- Errors should be reported
- (maybe also log DB Queries, Postgres integration might be enough)
- Agent on Raspberry PI
- Get metrics and maybe logs from my raspberry pi
- Helpful Kibana dashboards
- Security or error alerts
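For the "Backend with APM" point above, here is a hedged sketch of how a Gin backend can be instrumented with the official Elastic APM middleware; whether this project wires it up exactly like this is an assumption on my part.

```go
// Hedged sketch: every request becomes an APM transaction and errors are
// reported to the APM server configured via ELASTIC_APM_* environment variables.
package main

import (
	"github.com/gin-gonic/gin"
	"go.elastic.co/apm/module/apmgin/v2"
)

func main() {
	r := gin.New()
	// Wrap all routes with the Elastic APM middleware.
	r.Use(apmgin.Middleware(r))

	r.GET("/health", func(c *gin.Context) {
		c.String(200, "ok")
	})

	r.Run(":8080")
}
```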
The backend can usually be started in the IDE or with go run main.go.
In production the app will run in a Docker container. To test the container, run the following commands.
Note that the Docker container uses the config-docker.yml config!
docker build --tag deployment_controller_dev_backend ./backend
docker run -p 8080:8080 --name dp_crtl_be deployment_controller_dev_backend
The backend can be configured via the config.yml file in its root directory. When running in production, this config can be overwritten by environment variables. The environment variables should be written in uppercase camel case.
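As an illustration of the override mechanism, here is a hedged sketch using spf13/viper; the actual config library, key names and the exact environment-variable naming used by this backend are assumptions on my part.

```go
// Hedged sketch: read config.yml and allow environment variables to override it.
package main

import (
	"fmt"
	"strings"

	"github.com/spf13/viper"
)

func main() {
	viper.SetConfigFile("config.yml")
	// With this replacer, a key like "server.port" could be overridden by an
	// environment variable such as SERVER_PORT (naming is an assumption).
	viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	viper.AutomaticEnv()

	if err := viper.ReadInConfig(); err != nil {
		panic(err)
	}
	fmt.Println("port:", viper.GetInt("server.port"))
}
```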
Currently, the backend writes its logs to a JSON file. The JSON structure matches the Elastic formatting, which makes further analysis in Kibana rather easy. Filebeat keeps track of the backend's logfile and sends new lines to Elasticsearch.
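A minimal sketch of how logrus can produce one-JSON-object-per-line output to a file that Filebeat can tail; the field names and log path are assumptions, and the real backend may use an ECS-specific formatter instead.

```go
// Hedged sketch: JSON logging with logrus to a file for Filebeat to ship.
package main

import (
	"os"

	"github.com/sirupsen/logrus"
)

func main() {
	f, err := os.OpenFile("backend.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	log := logrus.New()
	log.SetFormatter(&logrus.JSONFormatter{}) // one JSON object per line
	log.SetOutput(f)

	log.WithFields(logrus.Fields{
		"method": "GET",
		"path":   "/auth",
		"status": 200,
	}).Info("request handled")
}
```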
The frontend can be started and built with the following commands.
Make sure that a .env file exists with the PUBLIC_BACKEND_URL config.
# run the dev builds
npm run dev
# compile for production
npm run build