Compare commits
No commits in common. "bb355942111d95f847fcecd255b5d01f32ff5c9c" and "ae5dfbd2105611cfebbebd5e701c05e7d9468fb1" have entirely different histories.
bb35594211...ae5dfbd210
57 changed files with 273 additions and 5958 deletions
18
CLAUDE.md
@@ -1,18 +0,0 @@
# Project rules for Claude Code

## Team workflow

Tasks are always delegated to the subagent team, never handled directly.
Roles and models: see memory (`project_team_setup.md`).

## Documentation

After every implementation, the affected documentation must be updated **in the same commit**:

- `docs/SCHEMA.md` — for database changes
- `docs/API-ENDPOINTS.md` — for new or changed routes
- `docs/SERVER-KONZEPT.md` — for architecture or concept changes
- `server/backend/README.md` — for new packages, endpoints, or configuration variables
- `DEVELOPMENT.md` — for new env variables or development prerequisites

Doris is responsible for keeping the documentation up to date and is brought in automatically in every phase.
@@ -40,9 +40,9 @@ Already present:

Not yet present:

- admin-side user authentication and access control
- multi-tenancy isolation at the API level
- production-grade SSL/TLS handling for deployment
- Docker secret integration for `MORZ_INFOBOARD_ADMIN_PASSWORD`
- Ansible variable `morz_admin_password` as a vault variable (phase 6)

## Prerequisites on the development machine

@@ -118,28 +118,6 @@ Note:

- `make` and `go` are not installed on the current system of this session; the commands are prepared for the development machine

## Local development with login

Since the tenant feature was implemented, the backend has been protected by session-based authentication. Two additional environment variables must be set for local development:

- `MORZ_INFOBOARD_ADMIN_PASSWORD` – sets the password of the initial admin user. On backend start, a user `admin` is created automatically (or its password is updated) and assigned to the default tenant `morz`. If the variable is empty, no admin is created and the login area is unusable.
- `MORZ_INFOBOARD_DEV_MODE` – sets the session cookie without the `Secure` flag so that it is also transmitted over unencrypted HTTP (local `localhost`). Without this flag the cookie is only set over HTTPS, and the login fails silently in local development.

Recommended start for local development:

```bash
cd server/backend
MORZ_INFOBOARD_ADMIN_PASSWORD=dev \
MORZ_INFOBOARD_DEV_MODE=true \
go run ./cmd/api
```

The login is then reachable at `http://localhost:8080/login` with `admin` / `dev`.

Note: `MORZ_INFOBOARD_DEV_MODE=true` must never be set in a production environment, because there the cookie must be transmitted exclusively over HTTPS.
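The `Secure`-flag behavior described above can be sketched roughly like this (a minimal illustration, not the project's actual handler; the function name and the `SameSite` choice are assumptions):

```go
package main

import (
	"fmt"
	"net/http"
)

// sessionCookie builds the session cookie. The Secure flag is dropped in
// dev mode so the cookie also travels over plain HTTP on localhost; in
// production it is only sent over HTTPS.
func sessionCookie(sessionID string, devMode bool) *http.Cookie {
	return &http.Cookie{
		Name:     "morz_session",
		Value:    sessionID,
		Path:     "/",
		HttpOnly: true,     // not readable from JavaScript
		Secure:   !devMode, // HTTPS-only unless MORZ_INFOBOARD_DEV_MODE=true
		SameSite: http.SameSiteLaxMode,
	}
}

func main() {
	fmt.Println(sessionCookie("abc", true).Secure)  // dev mode: false
	fmt.Println(sessionCookie("abc", false).Secure) // production: true
}
```

This is also why the login "fails silently" without dev mode: the browser simply never stores a `Secure` cookie received over plain HTTP.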

---

## Local start

### Start the backend locally

@@ -159,15 +137,6 @@ Configurable via:

- `MORZ_INFOBOARD_HTTP_ADDR` – HTTP address (default: `:8080`)
- `MORZ_INFOBOARD_STATUS_STORE_PATH` – path to the JSON file for the persistent status store; leave empty for pure in-memory operation
- `MORZ_INFOBOARD_ADMIN_PASSWORD` – password for the initial admin user (empty = no EnsureAdminUser run)
- `MORZ_INFOBOARD_DEFAULT_TENANT` – slug of the default tenant the admin user is assigned to (default: `morz`)
- `MORZ_INFOBOARD_REGISTER_SECRET` – pre-shared secret for POST /api/v1/screens/register; empty = open to everyone
- `MORZ_INFOBOARD_DEV_MODE` – if `true`: the session cookie is set without the `Secure` flag (local development only)

**Note on `users.role`:**

- `admin` — has access to all admin functions and screens
- `screen_user` — has access only to screens for which an explicit entry exists in `user_screen_permissions`
- `tenant` — has access to all screens of its tenant (deprecated, not yet fully implemented)

Examples:

@@ -200,8 +169,6 @@ Optional:

- `MORZ_INFOBOARD_MQTT_USERNAME` – MQTT username
- `MORZ_INFOBOARD_MQTT_PASSWORD` – MQTT password
- `MORZ_INFOBOARD_REGISTER_SECRET` – pre-shared secret for self-registration; must match the server configuration
- `MORZ_INFOBOARD_SCREENSHOT_EVERY` – interval for periodic screenshots in seconds (e.g. `300` for 5 minutes; 0 or empty = disabled)
- `MORZ_INFOBOARD_CONFIG=/etc/signage/config.json` – file-based configuration

An example configuration is provided in `player/config/config.example.json`.
@@ -295,15 +262,11 @@ The playbook handles:

## Recommended next implementation steps

Items 1–4 of the original list (error format, routing, status, MQTT) are done.
Open items from phase 6 of the tenant feature plan (`docs/TENANT-FEATURE-PLAN.md`):

1. Set up a Docker secret for `MORZ_INFOBOARD_ADMIN_PASSWORD` in `compose/`
2. Define the Ansible variable `morz_admin_password` as a vault variable
3. Code review by Larry (SQL injection, session fixation, bcrypt cost, middleware order)
4. Walk through the end-to-end test checklist in `docs/TEST-CHECKLIST-DEV.md`
5. Deployment: build the image, verify migration `002_auth.sql`, check the logs
6. Long term: evolve the network, sync, and command paths toward production
1. Backend: establish a uniform error format and a basic routing structure
2. Backend: stabilize configuration and app lifecycle
3. Agent and backend: extend the HTTP status path as the basis for identity, persistence, and the later admin preview
4. Agent: then set up MQTT-specific reachability and finer connectivity threshold logic
5. Then incrementally evolve the network, sync, and command paths toward production

## End-to-end development test (backend + agent)

64
TODO.md
@@ -47,7 +47,7 @@
- [x] Define the directory layout on the player
- [x] Scope `player-agent` functionally
- [x] Scope `player-ui` functionally (local kiosk page with splash + sysinfo overlay)
- [x] Define the watchdog concept for browser and agent
- [ ] Define the watchdog concept for browser and agent
- [x] Specify the offline overlay behavior
- [x] Work out error handling for web content and timeouts
- [x] Plan display control for on/off, rotation, and restart
@@ -57,17 +57,17 @@

- [x] Scope the API backend functionally
- [x] Split the admin interface into main areas
- [x] Split the company/monitor interface into main areas
- [x] Company/tenant interface → see docs/TENANT-FEATURE-PLAN.md
- [ ] Split the company/monitor interface into main areas
- [ ] Company/tenant interface → see docs/TENANT-FEATURE-PLAN.md
- [x] Define the storage concept for uploads, cache files, and screenshots
- [x] Define the authentication concept
- [x] Secure tenant separation in the data model and in the APIs
- [x] Define the logging and monitoring concept
- [x] Scope the template editor for global campaigns functionally
- [x] Plan the activation interface for seasonal or temporary campaigns
- [x] Plan grouping or a slot model for cross-monitor layouts
- [ ] Define the logging and monitoring concept
- [ ] Scope the template editor for global campaigns functionally
- [ ] Plan the activation interface for seasonal or temporary campaigns
- [ ] Plan grouping or a slot model for cross-monitor layouts
- [x] Scope the provisioning UI for new screens functionally and technically
- [x] Plan the jobrunner concept for Ansible-based initial installation
- [ ] Plan the jobrunner concept for Ansible-based initial installation

## Phase 5 - Prototyping

@@ -89,18 +89,18 @@
- [x] Create the Docker Compose setup for the server
- [x] Create systemd units for the player
- [x] Create the Chromium kiosk start script
- [x] Integrate screenshot generation on the player
- [ ] Integrate screenshot generation on the player
- [x] Integrate heartbeat and status messages
- [x] Implemented MQTT playlist-change synchronization with backend debounce (2s) and agent debounce (3s)
- [ ] Verify error and restart behavior

## Phase 7 - Ansible automation

- [x] Create the `signage_base` role
- [ ] Create the `signage_base` role
- [x] Create the `signage_player` role
- [x] Create the `signage_display` role
- [x] Create the `signage_server` role
- [x] Create the `signage_provision` role
- [ ] Create the `signage_server` role
- [ ] Create the `signage_provision` role
- [x] Design the inventory/variable model for multiple monitors
- [x] Map screen-specific variables such as `screen_id`, rotation, and resolution
- [x] Automate the initial installation of a new player

@@ -145,7 +145,7 @@
- [x] Show screen online/offline status in the admin table (populated from the /status endpoint)
- [x] Wrap the playlist table in an overflow-x wrapper (responsive on small screens)
- [x] PDF rendering: hide the sidebar and toolbar in the Chromium PDF viewer (URL parameters navpanes=0, toolbar=0)
- [x] PDF rendering: integrate PDF.js for automatic page turning
- [ ] PDF rendering: integrate PDF.js for automatic page turning

### Medium priority

@@ -171,44 +171,6 @@
- [x] Fix: /api/startup-token sets the Cache-Control: no-store header (server + client)
- [x] Fix: TestAssetsServed nil dereference caused by a dead goroutine resolved

## Security & Code Review (Opus, 2026-03-23)

### Critical — security vulnerabilities

- [x] **K2** Tenant isolation for `/manage/{screenSlug}/*`: `requireScreenAccess()` in all manage handlers
- [x] **K3** `DELETE /api/v1/media/{id}`: tenant check via reqcontext.UserFromContext
- [x] **K4** JSON API playlist routes (`/items`, `/playlists/*/items`, `/order`, `/duration`): `requirePlaylistAccess()` + `GetByItemID()` in the store
- [x] **K1** CSRF protection: double-submit cookie pattern (`httpapi/csrf.go`); JS injection into all templates; middleware in the router
- [x] **K6** `POST /api/v1/screens/register`: pre-shared secret via `MORZ_INFOBOARD_REGISTER_SECRET` (header `X-Register-Secret`); the player agent sends the secret along
- [x] **K5** Admin password removed from the log — only `[set]` is logged

### Important — robustness

- [x] **N5** Directory listing on `/uploads/` disabled via `neuteredFileSystem` (`httpapi/uploads.go`)
- [x] **N6** Uploads separated by tenant: `fileutil.SaveUploadedFile()` places files in `uploads/{tenantSlug}/`
- [x] **W1** Race condition on `order_index` fixed: atomic subquery in `AddItem()`
- [x] **W2** Graceful shutdown implemented: `http.Server.Shutdown()` with a 15s timeout on SIGTERM/SIGINT
- [x] **W3** Uploads capped with `http.MaxBytesReader` (512 MB) in all three upload handlers
- [x] **W4** `err.Error()` no longer sent to the client — generic error messages, details kept server-side
- [x] **W7** Template execution errors: render into a `bytes.Buffer`, send to the client only on success (`renderTemplate()`)

### Improvement — maintainability

- [ ] **V3** No tests for auth, middleware, or tenant handlers (all phase 1-5 code without coverage)
- [x] **V1** Upload logic consolidated in `internal/fileutil/fileutil.go` (`SaveUploadedFile`)
- [x] **V5** Cookie name as the constant `reqcontext.SessionCookieName` — manage/auth.go and middleware.go use it
- [x] **V6** Structured logging: `log/slog` with a JSON handler in `main.go`; `app.go` uses `slog.Info/slog.Error`
- [x] **V7** The DB pool is closed in the graceful-shutdown handler (`a.dbPool.Close()`)

### Nice-to-have — features

- [x] **N1** Rate limiting on `/login`: in-memory sliding window (5 attempts/minute per IP) via `httpapi/ratelimit.go`
- [ ] **N2** Password change / self-service reset
- [ ] **N3** Tenant user management in the admin UI
- [ ] **N4** Session TTL configurable via a config variable (currently hardcoded to 8h)

**Note on K6:** `MORZ_INFOBOARD_REGISTER_SECRET` must be set identically in `server/.env` / `docker-compose.yml` and in the player config (`MORZ_INFOBOARD_REGISTER_SECRET` or `register_secret` in `config.json`). If the variable is empty, the endpoint stays open (backward compatibility).

## Cross-cutting topics

- [ ] Plan data backup for the database and media
@@ -5,8 +5,3 @@ all:
  hosts:
    info10:
    info01-dev:
signage_servers:
  hosts:
    dockerbox:
      # ansible_host: 10.0.0.70
      # ansible_user: admin
@@ -1,12 +0,0 @@
---
signage_user: morz
signage_timezone: "Europe/Berlin"

signage_base_packages:
  - curl
  - ca-certificates
  - rsync
  - htop
  - vim-tiny
  - bash-completion
  - ntp
@@ -1,12 +0,0 @@
---
- name: Restart cron
  ansible.builtin.systemd:
    name: cron
    state: restarted
  become: true

- name: Restart journald
  ansible.builtin.systemd:
    name: systemd-journald
    state: restarted
  become: true
@@ -1,55 +0,0 @@
---
- name: Update apt cache and upgrade installed packages
  ansible.builtin.apt:
    update_cache: true
    upgrade: dist
    cache_valid_time: 3600
  become: true

- name: Install base packages
  ansible.builtin.apt:
    name: "{{ signage_base_packages }}"
    state: present
  become: true

- name: Set system timezone
  community.general.timezone:
    name: "{{ signage_timezone }}"
  become: true
  notify: Restart cron

- name: Ensure NTP service is enabled and running
  ansible.builtin.systemd:
    name: ntp
    enabled: true
    state: started
  become: true

- name: Ensure journald drop-in directory exists
  ansible.builtin.file:
    path: /etc/systemd/journald.conf.d
    state: directory
    owner: root
    group: root
    mode: "0755"
  become: true

- name: Configure journald volatile storage (RAM only, protects the SD card)
  ansible.builtin.copy:
    dest: /etc/systemd/journald.conf.d/morz-volatile.conf
    content: |
      [Journal]
      Storage=volatile
      RuntimeMaxUse=20M
    owner: root
    group: root
    mode: "0644"
  become: true
  notify: Restart journald

- name: Ensure signage user exists
  ansible.builtin.user:
    name: "{{ signage_user }}"
    create_home: true
    state: present
  become: true
@@ -1,16 +0,0 @@
---
# Admin token used to authenticate against the server API
# Must be overridden in group_vars, host_vars or vault.
signage_admin_token: ""

# Server base URL reachable from the Ansible controller
signage_server_base_url: "http://10.0.0.70:8080"

# SSH public key to deploy to the signage user
signage_ssh_public_key: ""

# User that Ansible should permanently manage (after bootstrapping)
signage_user: morz

# Config dir on the target (shared with signage_player role)
signage_config_dir: /etc/signage
@@ -1,3 +0,0 @@
---
# No handlers required for provisioning role.
# Handlers are intentionally empty – provisioning tasks are one-shot.
@@ -1,57 +0,0 @@
---
- name: Ensure signage user exists
  ansible.builtin.user:
    name: "{{ signage_user }}"
    create_home: true
    state: present
  become: true

- name: Ensure .ssh directory exists for signage user
  ansible.builtin.file:
    path: "/home/{{ signage_user }}/.ssh"
    state: directory
    owner: "{{ signage_user }}"
    group: "{{ signage_user }}"
    mode: "0700"
  become: true

- name: Deploy SSH public key for signage user
  ansible.builtin.authorized_key:
    user: "{{ signage_user }}"
    key: "{{ signage_ssh_public_key }}"
    state: present
  become: true
  when: signage_ssh_public_key | length > 0

- name: Ensure config directory exists
  ansible.builtin.file:
    path: "{{ signage_config_dir }}"
    state: directory
    owner: root
    group: root
    mode: "0755"
  become: true

- name: Deploy vars.yml template for player config
  ansible.builtin.template:
    src: vars.yml.j2
    dest: "{{ signage_config_dir }}/vars.yml"
    owner: root
    group: "{{ signage_user }}"
    mode: "0640"
  become: true

- name: Register screen at server via API
  ansible.builtin.uri:
    url: "{{ signage_server_base_url }}/api/v1/screens/register"
    method: POST
    body_format: json
    body:
      slug: "{{ screen_id }}"
      name: "{{ screen_name | default(screen_id) }}"
      orientation: "{{ screen_orientation | default('landscape') }}"
    headers:
      Content-Type: application/json
    status_code: [200, 201]
  delegate_to: localhost
  when: screen_id is defined
@@ -1,16 +0,0 @@
# Managed by Ansible – signage_provision role
# Do not edit manually on the device.

screen_id: "{{ screen_id }}"
screen_name: "{{ screen_name | default(screen_id) }}"
screen_orientation: "{{ screen_orientation | default('landscape') }}"

morz_server_base_url: "{{ morz_server_base_url | default(signage_server_base_url) }}"
morz_mqtt_broker: "{{ morz_mqtt_broker | default('') }}"
morz_mqtt_username: "{{ morz_mqtt_username | default('') }}"
morz_mqtt_password: "{{ morz_mqtt_password | default('') }}"

morz_heartbeat_every_seconds: {{ morz_heartbeat_every_seconds | default(30) }}
morz_status_report_every_seconds: {{ morz_status_report_every_seconds | default(60) }}
morz_player_listen_addr: "{{ morz_player_listen_addr | default('127.0.0.1:8090') }}"
morz_player_content_url: "{{ morz_player_content_url | default('') }}"
@@ -1,26 +0,0 @@
---
signage_server_deploy_dir: /srv/docker/info-board-neu
signage_server_data_dir: /srv/docker/info-board-neu/data

# Backend
morz_http_addr: ":8080"
morz_database_url: "postgres://morz_infoboard:morz_infoboard@db:5432/morz_infoboard?sslmode=disable"
morz_upload_dir: /app/uploads
morz_status_store_path: /app/data/status
morz_default_tenant: morz
morz_dev_mode: "false"

# Admin password – must be overridden in group_vars or vault
morz_admin_password: ""

# MQTT
morz_mqtt_broker: ""
morz_mqtt_username: ""
morz_mqtt_password: ""

# Firewall
signage_server_ufw_enabled: true
signage_server_ufw_allow_https: true
signage_server_ufw_allow_mqtt: true
signage_server_mqtt_port: "1883"
signage_server_https_port: "443"
|
|||
---
|
||||
- name: Restart morz-server stack
|
||||
community.docker.docker_compose_v2:
|
||||
project_src: "{{ signage_server_deploy_dir }}"
|
||||
state: present
|
||||
pull: always
|
||||
become: true
|
||||
|
|
@@ -1,130 +0,0 @@
---
- name: Install Docker dependencies
  ansible.builtin.apt:
    name:
      - ca-certificates
      - curl
      - gnupg
    state: present
    update_cache: true
  become: true

- name: Create Docker apt keyring directory
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    owner: root
    group: root
    mode: "0755"
  become: true

- name: Add Docker GPG key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/debian/gpg
    dest: /etc/apt/keyrings/docker.asc
    owner: root
    group: root
    mode: "0644"
  become: true

- name: Add Docker apt repository
  ansible.builtin.apt_repository:
    repo: >-
      deb [arch={{ ansible_architecture | replace('x86_64', 'amd64') | replace('aarch64', 'arm64') }}
      signed-by=/etc/apt/keyrings/docker.asc]
      https://download.docker.com/linux/debian
      {{ ansible_distribution_release }} stable
    state: present
    filename: docker
  become: true

- name: Install Docker Engine and Compose plugin
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    state: present
    update_cache: true
  become: true

- name: Ensure Docker service is enabled and running
  ansible.builtin.systemd:
    name: docker
    enabled: true
    state: started
  become: true

- name: Create server deploy directory
  ansible.builtin.file:
    path: "{{ signage_server_deploy_dir }}"
    state: directory
    owner: root
    group: root
    mode: "0750"
  become: true

- name: Create server data directory
  ansible.builtin.file:
    path: "{{ signage_server_data_dir }}"
    state: directory
    owner: root
    group: root
    mode: "0750"
  become: true

- name: Create uploads directory
  ansible.builtin.file:
    path: "{{ signage_server_deploy_dir }}/uploads"
    state: directory
    owner: root
    group: root
    mode: "0750"
  become: true

- name: Deploy docker-compose.yml
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: "{{ signage_server_deploy_dir }}/docker-compose.yml"
    owner: root
    group: root
    mode: "0640"
  become: true
  notify: Restart morz-server stack

- name: Deploy server environment file
  ansible.builtin.template:
    src: env.j2
    dest: "{{ signage_server_deploy_dir }}/.env"
    owner: root
    group: root
    mode: "0600"
  become: true
  notify: Restart morz-server stack

- name: Allow HTTPS through ufw
  community.general.ufw:
    rule: allow
    port: "{{ signage_server_https_port }}"
    proto: tcp
    comment: morz-infoboard HTTPS
  become: true
  when: signage_server_ufw_enabled and signage_server_ufw_allow_https

- name: Allow MQTT through ufw
  community.general.ufw:
    rule: allow
    port: "{{ signage_server_mqtt_port }}"
    proto: tcp
    comment: morz-infoboard MQTT
  become: true
  when: signage_server_ufw_enabled and signage_server_ufw_allow_mqtt

- name: Pull and start morz-server stack
  community.docker.docker_compose_v2:
    project_src: "{{ signage_server_deploy_dir }}"
    state: present
    pull: always
  become: true
@@ -1,58 +0,0 @@
---
# Managed by Ansible – signage_server role
# Do not edit manually on the server.

services:
  backend:
    image: git.az-it.net/az/morz-infoboard/backend:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      MORZ_INFOBOARD_HTTP_ADDR: "${MORZ_HTTP_ADDR}"
      MORZ_INFOBOARD_DATABASE_URL: "${MORZ_DATABASE_URL}"
      MORZ_INFOBOARD_UPLOAD_DIR: /app/uploads
      MORZ_INFOBOARD_STATUS_STORE_PATH: /app/data/status
      MORZ_INFOBOARD_MQTT_BROKER: "${MORZ_MQTT_BROKER}"
      MORZ_INFOBOARD_MQTT_USERNAME: "${MORZ_MQTT_USERNAME}"
      MORZ_INFOBOARD_MQTT_PASSWORD: "${MORZ_MQTT_PASSWORD}"
      MORZ_INFOBOARD_ADMIN_PASSWORD: "${MORZ_ADMIN_PASSWORD}"
      MORZ_INFOBOARD_DEFAULT_TENANT: "${MORZ_DEFAULT_TENANT}"
      MORZ_INFOBOARD_DEV_MODE: "${MORZ_DEV_MODE}"
    volumes:
      - ./uploads:/app/uploads
      - ./data:/app/data
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:17-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: morz_infoboard
      POSTGRES_PASSWORD: "${MORZ_DB_PASSWORD}"
      POSTGRES_DB: morz_infoboard
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U morz_infoboard"]
      interval: 10s
      timeout: 5s
      retries: 5

  mqtt:
    image: eclipse-mosquitto:2
    restart: unless-stopped
    ports:
      - "1883:1883"
      - "9001:9001"
    volumes:
      - ./mosquitto/config:/mosquitto/config:ro
      - mosquitto_data:/mosquitto/data
      - mosquitto_log:/mosquitto/log

volumes:
  db_data:
  mosquitto_data:
  mosquitto_log:
@@ -1,16 +0,0 @@
# Managed by Ansible – signage_server role
# Do not edit manually on the server.

MORZ_HTTP_ADDR={{ morz_http_addr }}
MORZ_DATABASE_URL={{ morz_database_url }}
MORZ_DB_PASSWORD={{ morz_db_password | default('morz_infoboard') }}
MORZ_UPLOAD_DIR={{ morz_upload_dir }}
MORZ_STATUS_STORE_PATH={{ morz_status_store_path }}
MORZ_DEFAULT_TENANT={{ morz_default_tenant }}
MORZ_DEV_MODE={{ morz_dev_mode }}

MORZ_ADMIN_PASSWORD={{ morz_admin_password }}

MORZ_MQTT_BROKER={{ morz_mqtt_broker }}
MORZ_MQTT_USERNAME={{ morz_mqtt_username }}
MORZ_MQTT_PASSWORD={{ morz_mqtt_password }}
@@ -1,33 +1,7 @@
---
# Provision a fresh player (run once per new screen)
- name: Provision new Signage Player
  hosts: signage_players
  gather_facts: false
  tags: [provision]
  roles:
    - signage_provision

# Base system setup for all signage nodes
- name: Base setup for Signage Players
  hosts: signage_players
  gather_facts: true
  tags: [base, player]
  roles:
    - signage_base

# Deploy Morz Infoboard Player Agent and Kiosk Display
- name: Deploy Morz Infoboard Player Agent
  hosts: signage_players
  gather_facts: false
  tags: [player]
  roles:
    - signage_player
    - signage_display

# Deploy Morz Infoboard Central Server
- name: Deploy Morz Infoboard Central Server
  hosts: signage_servers
  gather_facts: true
  tags: [server]
  roles:
    - signage_server
@@ -33,9 +33,6 @@ services:
      MORZ_INFOBOARD_DATABASE_URL: "postgres://morz_infoboard:morz_infoboard@postgres:5432/morz_infoboard?sslmode=disable"
      MORZ_INFOBOARD_UPLOAD_DIR: "/uploads"
      MORZ_INFOBOARD_MQTT_BROKER: "tcp://mosquitto:1883"
      MORZ_INFOBOARD_ADMIN_PASSWORD: "${MORZ_INFOBOARD_ADMIN_PASSWORD}"
      MORZ_INFOBOARD_DEV_MODE: "${MORZ_INFOBOARD_DEV_MODE:-false}"
      MORZ_INFOBOARD_DEFAULT_TENANT: "${MORZ_INFOBOARD_DEFAULT_TENANT:-morz}"
    volumes:
      - uploads:/uploads
    depends_on:
@@ -507,122 +507,6 @@ Special endpoint for resolving message-wall requests (still in development)

---

## Authentication (web forms)

None of the auth routes require prior authentication.

### GET /login

Shows the login form.

- If a valid `morz_session` cookie is present, the user is redirected straight to the
  respective dashboard (`/admin` for admins, `/tenant/{slug}/dashboard` for tenant users).

**Response:** HTML page with a username/password form and an optional flash message.

---

### POST /login

Processes the login input.

**Request (form-encoded):**
```
username=admin&password=geheim
```

**Behavior:**
- The password is checked with `bcrypt.CompareHashAndPassword`
- On success, a `morz_session` cookie is set (HttpOnly, Secure, 24h TTL)
- Redirect depending on role: `admin` → `/admin`, `tenant` → `/tenant/{slug}/dashboard`
- On failure: return to the login page with a flash message

**Status:**
- `303 See Other` — success, redirect
- `303 See Other` — failure, return to the login page with `?msg=`

---

### POST /logout

Logs out the current user.

**Behavior:**
- The session is deleted in the DB (`DeleteSession`)
- The cookie is cleared with `MaxAge=-1`
- Redirect to `/login`

**Status:**
- `303 See Other`

---

## Tenant self-service dashboard (web forms)

All tenant routes require `RequireAuth` + `RequireTenantAccess`.
Admins can access every tenant; tenant users only their own.

### GET /tenant/{tenantSlug}/dashboard

Shows the tenant self-service dashboard.

**Tabs:**
- Tab A "My monitors" — screen cards with live status (via JS fetch from `/api/v1/screens/status`)
- Tab B "Media library" — upload form and file list

**Query parameters:**
- `tab=media` — opens tab B directly (e.g. after an upload redirect)
- `flash=uploaded` / `flash=deleted` — shows a success flash message

**Response:** HTML page.

---

### POST /tenant/{tenantSlug}/upload

Uploads a media item for the tenant.

**Request (multipart form):**
```
type: image (or video, pdf)
title: Mein Bild
file: <binary data>
```

or, for a web URL:
```
type: web
title: Externe Website
url: http://example.com
```

**Behavior:**
- The file is stored in `MORZ_INFOBOARD_UPLOAD_DIR`
- The MIME type is derived from the upload header
- Max. upload size: 512 MB

**Status:**
- `303 See Other` → `/tenant/{slug}/dashboard?tab=media&flash=uploaded`
- `400 Bad Request` — missing type or file
- `404 Not Found` — tenant does not exist

---

### POST /tenant/{tenantSlug}/media/{mediaId}/delete

Deletes one of the tenant's media assets.

**Behavior:**
- Ownership check: `asset.TenantID` must match the tenant
- The physical file is deleted if present

**Status:**
- `303 See Other` → `/tenant/{slug}/dashboard?tab=media&flash=deleted`
- `403 Forbidden` — the asset does not belong to this tenant
- `404 Not Found` — tenant or asset does not exist

---

## Admin UI (web forms)

### GET /admin

@ -694,89 +578,6 @@ Rückleitung zur Admin-Seite.
|
|||
|
||||
---

## Screen-User Management (Admin)

### POST /admin/users

Creates a new screen user for a tenant (admin form).

**Request body (form-encoded):**

```
username=screenuser1&password=geheim
```

**Behavior:**

- a new user with `role = 'screen_user'` is created
- the password is hashed with bcrypt
- the user is assigned to the current tenant

**Status:**

- `200 OK` or `201 Created` — screen user created
- `400 Bad Request` — missing or invalid parameters, or the username is already taken
- `500 Internal Server Error` — database error

Redirects back to the admin page.

---

### POST /admin/users/{userID}/delete

Deletes a screen user and all of their screen permissions.

**Behavior:**

- the user with role `screen_user` is deleted
- all entries in `user_screen_permissions` for this user are deleted

**Status:**

- `200 OK` — screen user deleted
- `404 Not Found` — user does not exist or has the wrong role
- `500 Internal Server Error` — database error

Redirects back to the admin page.

---

### POST /admin/screens/{screenID}/users

Adds a screen user to a screen.

**Request body (form-encoded):**

```
user_id=<userID>
```

**Behavior:**

- an entry in `user_screen_permissions` is created
- the user must have the role `screen_user`
- a unique constraint prevents duplicates

**Status:**

- `200 OK` — user added to the screen
- `400 Bad Request` — missing parameters, or the user was already added
- `404 Not Found` — screen or user does not exist
- `500 Internal Server Error` — database error

Redirects back to the admin page or the screen detail page.

---

### POST /admin/screens/{screenID}/users/{userID}/remove

Removes a screen user from a screen.

**Behavior:**

- the entry in `user_screen_permissions` is deleted
- the user itself is kept; only the permission is removed

**Status:**

- `200 OK` — user removed from the screen
- `404 Not Found` — screen, user, or permission does not exist
- `500 Internal Server Error` — database error

Redirects back to the admin page or the screen detail page.

---

## Playlist Management UI (Web Forms)

### GET /manage/{screenSlug}

@@ -1024,35 +825,8 @@ Typische HTTP-Status:
---

## In Preparation (Phase 6 / future)

The following endpoints are prepared but not yet fully implemented:

- `POST /api/v1/player/screenshot` — upload of player screenshots to the backend server
  - called by the agent in `player/agent/internal/screenshot/screenshot.go` at the interval `MORZ_INFOBOARD_SCREENSHOT_EVERY`
  - multipart request with `screen_id`, `screenshot` (file), `mime_type`
  - requires a backend handler for persistence and/or processing

---

## Changelog

- **2026-03-23 (update):** screen-user management endpoints (Doris / docs review)
  - `POST /admin/users` — create a screen user
  - `POST /admin/users/{userID}/delete` — delete a screen user
  - `POST /admin/screens/{screenID}/users` — add a user to a screen
  - `POST /admin/screens/{screenID}/users/{userID}/remove` — remove a user from a screen
- **2026-03-23 (update):** security enhancements and upload consolidation (Doris / docs review)
  - CSRF protection (double-submit cookie) in `internal/httpapi/csrf.go`
  - rate limiting for `/login` in `internal/httpapi/ratelimit.go`
  - upload logic consolidated in `internal/fileutil/fileutil.go` and `internal/httpapi/uploads.go`
  - new env variable `MORZ_INFOBOARD_REGISTER_SECRET` documented
  - screenshot module prepared in the agent with `MORZ_INFOBOARD_SCREENSHOT_EVERY`
- **2026-03-23 (update):** auth and tenant dashboard endpoints added (Doris / docs review)
  - `GET /login`, `POST /login`, `POST /logout` documented
  - `GET /tenant/{tenantSlug}/dashboard` documented
  - `POST /tenant/{tenantSlug}/upload` documented
  - `POST /tenant/{tenantSlug}/media/{mediaId}/delete` documented
- **2026-03-23:** initial documentation of all HTTP endpoints based on a code review
  - all screen-management endpoints documented
  - all playlist-management endpoints documented

@@ -1,535 +0,0 @@
# Info-Board Neu - Grouping and Slot Model for Cross-Monitor Layouts

## Goal

This document defines how screens are organized into groups and slots.

Groups and slots are needed for:

- **bulk actions** — addressing several screens with one campaign
- **monitor walls** — distributing lettering and layouts across multiple screens
- **future scalability** — adding new displays without restructuring

See also `docs/TEMPLATE-KONZEPT.md` for template types that use groups/slots.

## 1. Screen Groups

### Concept

A group is a semantic collection of multiple screens.

**Examples:**

- `all` — all screens in the system
- `wall-all` — all 9 info-wall screens
- `wall-row-1` — the 3 screens of the first row
- `wall-row-2` — the 3 screens of the second row
- `single-all` — all standalone displays (e.g. substitution-plan displays)
- `outdoor` — all outdoor display boards

### Group Types

#### Physical groups

Reflect the **real-world arrangement**:

- `wall-all` — all displays of one info wall
- `wall-row-1`, `wall-row-2`, `wall-row-3` — rows of a wall
- `wall-column-1`, `wall-column-2`, `wall-column-3` — columns of a wall

#### Functional groups

Reflect the **purpose**:

- `main-hall-all` — all displays in the main corridor
- `cafeteria-all` — all displays in the cafeteria
- `info-all` — all information displays

#### Device-type groups

Reflect the **device model**:

- `portrait-all` — all displays in portrait orientation
- `landscape-all` — all displays in landscape orientation
- `4k-displays` — 4K monitors only

#### Tenant groups (Phase 2)

Reflect **tenant membership**:

- `tenant-xyz-all` — all displays of tenant XYZ
- `tenant-xyz-public` — only the tenant's public displays

### Hierarchical Structure

Groups can be nested:

```
all
├── wall-all
│   ├── wall-row-1
│   │   ├── info01
│   │   ├── info02
│   │   └── info03
│   ├── wall-row-2
│   │   ├── info04
│   │   ├── info05
│   │   └── info06
│   └── wall-row-3
│       ├── info07
│       ├── info08
│       └── info09
├── single-all
│   ├── info10 (substitution plan 1)
│   └── info11 (substitution plan 2)
└── fallback-displays
    └── [none currently]
```

**Multiple group membership:**

A screen can be in several groups at once:

```
info01:
  - all
  - wall-all
  - wall-row-1
  - portrait-all
  - online-displays (automatic, based on status)
```
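Expanding a group — including its subgroups — into a concrete set of screens is a simple recursive walk over the parent/child relation. A minimal in-memory sketch; the map-based data structures stand in for the database tables and are illustrative only:

```go
package main

import (
	"fmt"
	"sort"
)

// expandGroup collects every screen that belongs to the group slug
// or to any of its (transitively) nested subgroups.
// children maps a group slug to its direct subgroups,
// members maps a group slug to its directly assigned screens.
func expandGroup(slug string, children, members map[string][]string) []string {
	seen := map[string]bool{}
	var walk func(g string)
	walk = func(g string) {
		for _, s := range members[g] {
			seen[s] = true
		}
		for _, sub := range children[g] {
			walk(sub)
		}
	}
	walk(slug)
	out := make([]string, 0, len(seen))
	for s := range seen {
		out = append(out, s)
	}
	sort.Strings(out)
	return out
}

func main() {
	children := map[string][]string{
		"wall-all": {"wall-row-1", "wall-row-2", "wall-row-3"},
	}
	members := map[string][]string{
		"wall-row-1": {"info01", "info02", "info03"},
		"wall-row-2": {"info04", "info05", "info06"},
		"wall-row-3": {"info07", "info08", "info09"},
	}
	fmt.Println(expandGroup("wall-all", children, members))
	// [info01 info02 info03 info04 info05 info06 info07 info08 info09]
}
```

The deduplication via the `seen` set matters because, as shown above, one screen may be reachable through several groups.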

## 2. Slot Model

### Concept

Slots describe **fixed positions within a layout**.

They are mainly used by `message_wall` templates to distribute sections of a large visual across individual screens.

**Example: 3x3 info wall**

```
┌─────────────────────────────────┐
│  [0,0]   [0,1]   [0,2]          │  slots wall-r1-c1, wall-r1-c2, wall-r1-c3
├─────────────────────────────────┤
│  [1,0]   [1,1]   [1,2]          │  slots wall-r2-c1, wall-r2-c2, wall-r2-c3
├─────────────────────────────────┤
│  [2,0]   [2,1]   [2,2]          │  slots wall-r3-c1, wall-r3-c2, wall-r3-c3
└─────────────────────────────────┘
```

**Slot naming:**

- `wall-r{row}-c{column}` (row/column, 0-based or 1-based)
- `wall-slot-{number}` (sequentially numbered, e.g. wall-slot-0 through wall-slot-8)

### Geometric Definition

For each slot the following is defined:

```json
{
  "slot_id": "wall-r1-c1",
  "row": 0,
  "col": 0,
  "layout_name": "3x3_grid",
  "crop_x": 0,
  "crop_y": 0,
  "crop_width": 640,
  "crop_height": 1080,
  "assigned_screen_id": "info01"
}
```

These values are:

- **generated server-side** — the admin does not have to enter pixel coordinates manually
- **automatically scalable** — across different resolutions
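The server-side generation of crop coordinates from a slot's row and column is straightforward for a uniform grid. A sketch assuming every slot has the same pixel size, as in the 640x1080 example above; the function name is illustrative:

```go
package main

import "fmt"

// Crop is the rectangle a screen cuts out of the full wall visual.
type Crop struct {
	X, Y, Width, Height int
}

// cropForSlot computes the crop rectangle of a slot in a uniform grid.
// row and col are 0-based; slotW/slotH is the pixel size of one slot.
func cropForSlot(row, col, slotW, slotH int) Crop {
	return Crop{
		X:      col * slotW,
		Y:      row * slotH,
		Width:  slotW,
		Height: slotH,
	}
}

func main() {
	// Slot wall-r1-c1 (row 0, col 0) from the JSON example above
	fmt.Println(cropForSlot(0, 0, 640, 1080)) // {0 0 640 1080}
	// Slot wall-r1-c2 (row 0, col 1)
	fmt.Println(cropForSlot(0, 1, 640, 1080)) // {640 0 640 1080}
}
```

Scaling to a different wall resolution only changes `slotW`/`slotH`, which is what makes the values "automatically scalable" as claimed above.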

## 3. Data Model

### Table `screen_groups`

```sql
CREATE TABLE screen_groups (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    slug TEXT NOT NULL UNIQUE,
    name TEXT NOT NULL,
    description TEXT,
    group_type TEXT NOT NULL CHECK (group_type IN (
        'physical', 'functional', 'device_type', 'tenant', 'custom'
    )),
    parent_group_id UUID REFERENCES screen_groups(id),
    active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```

**Examples:**

```sql
INSERT INTO screen_groups (slug, name, group_type)
VALUES
    ('all', 'Alle Screens', 'custom'),
    ('wall-all', 'Infowand - Alle', 'physical'),
    ('wall-row-1', 'Infowand - Reihe 1', 'physical'),
    ('single-all', 'Einzelanzeigen', 'functional'),
    ('portrait-all', 'Hochformat', 'device_type');
```

### Table `screen_group_members`

```sql
CREATE TABLE screen_group_members (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    group_id UUID NOT NULL REFERENCES screen_groups(id) ON DELETE CASCADE,
    screen_id UUID NOT NULL REFERENCES screens(id) ON DELETE CASCADE,
    added_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    UNIQUE(group_id, screen_id)
);
```

**Example:**

```sql
INSERT INTO screen_group_members (group_id, screen_id)
SELECT
    (SELECT id FROM screen_groups WHERE slug = 'wall-row-1'),
    id
FROM screens
WHERE slug IN ('info01', 'info02', 'info03');
```

### Table `layout_definitions`

```sql
CREATE TABLE layout_definitions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    slug TEXT NOT NULL UNIQUE,
    name TEXT NOT NULL,
    layout_type TEXT NOT NULL CHECK (layout_type IN (
        '3x3_grid', '2x2_grid', '1x9_row', '9x1_column', 'custom'
    )),
    rows INT NOT NULL,
    cols INT NOT NULL,
    description TEXT,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```

**Example:**

```sql
INSERT INTO layout_definitions (slug, name, layout_type, rows, cols)
VALUES ('3x3_infowand', 'Infowand 3x3', '3x3_grid', 3, 3);
```

### Table `layout_slots`

```sql
CREATE TABLE layout_slots (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    layout_id UUID NOT NULL REFERENCES layout_definitions(id) ON DELETE CASCADE,
    slot_slug TEXT NOT NULL,
    "row" INT NOT NULL,
    col INT NOT NULL,
    UNIQUE(layout_id, slot_slug)
);
```

(`row` is quoted because it is a reserved word in PostgreSQL.)

**Example:**

```sql
INSERT INTO layout_slots (layout_id, slot_slug, "row", col)
SELECT
    (SELECT id FROM layout_definitions WHERE slug = '3x3_infowand'),
    'wall-r' || r || '-c' || c,
    r - 1,
    c - 1
FROM generate_series(1, 3) AS r
CROSS JOIN generate_series(1, 3) AS c;
```

### Table `slot_screen_assignments`

```sql
CREATE TABLE slot_screen_assignments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    layout_id UUID NOT NULL REFERENCES layout_definitions(id),
    slot_id UUID NOT NULL REFERENCES layout_slots(id) ON DELETE CASCADE,
    screen_id UUID NOT NULL REFERENCES screens(id),
    assigned_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    UNIQUE(layout_id, slot_id, screen_id)
);
```

**Example:**

```sql
-- Assignment: slot wall-r1-c1 → screen info01 (in the 3x3 layout)
INSERT INTO slot_screen_assignments (layout_id, slot_id, screen_id)
SELECT
    l.id,
    ls.id,
    s.id
FROM layout_definitions l
JOIN layout_slots ls ON ls.layout_id = l.id
JOIN screens s ON s.slug = 'info01'
WHERE
    l.slug = '3x3_infowand'
    AND ls.slot_slug = 'wall-r1-c1';
```

## 4. Admin Management

### Managing Groups

**Page:** Admin → Groups

```
┌──────────────────────────────────────────┐
│ Screen Groups                            │
├──────────────────────────────────────────┤
│                                          │
│ Group           Type          Screens    │
│──────────────────────────────────────────│
│ all             custom        13         │
│ wall-all        physical      9          │
│ wall-row-1      physical      3          │
│ wall-row-2      physical      3          │
│ wall-row-3      physical      3          │
│ single-all      functional    2          │
│ portrait-all    device_type   12         │
│                                          │
│ [+ New group]   [Edit group]             │
└──────────────────────────────────────────┘
```

### Creating/Editing a Group

```
┌──────────────────────────────────────────┐
│ New Group                                │
├──────────────────────────────────────────┤
│                                          │
│ Name *                                   │
│ [ Infowand Reihe 2 __________________ ]  │
│   slug: wall-row-2 (automatic)           │
│                                          │
│ Group type *                             │
│ ⦿ physical     (wall arrangement)        │
│ ○ functional   (purpose)                 │
│ ○ device_type  (device type)             │
│ ○ tenant       (tenant)                  │
│ ○ custom       (user-defined)            │
│                                          │
│ Description                              │
│ [ The top row of the info wall ______ ]  │
│                                          │
│ Add screens                              │
│ [ Search: "info" ]                       │
│ □ info01  ← top row                      │
│ □ info02  ← top row                      │
│ ☑ info03  ← top row                      │
│ □ info04                                 │
│ ... (show unassigned screens only)       │
│                                          │
│ Selected screens                         │
│ info03 (portrait, online)                │
│ [ + add more ]                           │
│                                          │
│ Parent group                             │
│ [Dropdown: all > wall-all]               │
│ (optional, for the hierarchy)            │
│                                          │
│ [Save]  [Cancel]                         │
└──────────────────────────────────────────┘
```

### Creating a Layout Definition (for Message Wall)

**Page:** Admin → Layouts

```
┌──────────────────────────────────────────┐
│ Layout Definitions                       │
├──────────────────────────────────────────┤
│                                          │
│ Layout name       Type      Grid  Slots  │
│──────────────────────────────────────────│
│ 3x3 Infowand      3x3_grid  3x3   9      │
│ Vertretungsplan   2x2_grid  2x2   4      │
│ News-Lauf         1x9_row   1x9   9      │
│                                          │
│ [+ New layout]  [Edit]                   │
└──────────────────────────────────────────┘
```

Detail page of a layout:

```
Layout: 3x3 Infowand

Visualization:
┌─────────┬─────────┬─────────┐
│ Slot 1  │ Slot 2  │ Slot 3  │
├─────────┼─────────┼─────────┤
│ Slot 4  │ Slot 5  │ Slot 6  │
├─────────┼─────────┼─────────┤
│ Slot 7  │ Slot 8  │ Slot 9  │
└─────────┴─────────┴─────────┘

Slot assignments:
Slot 1 (wall-r1-c1) → screen info01 (portrait, 1920x1080)
Slot 2 (wall-r1-c2) → screen info02 (portrait, 1920x1080)
...

[Change screen assignments]  [Delete layout]
```

## 5. Use in Campaigns

### Applying a Campaign to a Group

**Example:** the admin activates a Christmas visual on `wall-all`:

```
Template: Weihnachtsmotiv 2025 (full_screen_media)

Select target group:
⦿ All screens
○ By group:
   [Dropdown: wall-all ]
   or wall-row-1, single-all, ...
○ Individual screens

→ the campaign is activated on all 9 screens in wall-all
→ every screen shows the same visual
→ (portrait/landscape variants are handled server-side)
```

### Message-Wall Campaign with the Slot Model

**Example:** the admin splits a lettering across the info wall:

```
Template: Schriftzug (message_wall)

Layout: 3x3 Infowand
Target group: wall-all (auto-expanded into slots)

Upload or draw the full graphic
   ↓
The system automatically generates:
   - slot wall-r1-c1 → crop x 0-640,    y 0-1080 → screen info01
   - slot wall-r1-c2 → crop x 640-1280, y 0-1080 → screen info02
   - slot wall-r1-c3 → crop x 1280-1920, y 0-1080 → screen info03
   - ... (9 assignments in total)
   ↓
Activate the campaign
   ↓
Each screen loads the crop it is responsible for
   ↓
The lettering appears spread across all 9 screens
```

## 6. Automatic Group Inference

The server can generate certain groups automatically:

```
# Automatically generated groups

all:
  - all screens in the system (no manual maintenance needed)

online-all:
  - all screens that are currently online
  - refreshed every 5 minutes

offline-all:
  - all screens that are currently offline

portrait-all:
  - all screens with orientation = "portrait"

landscape-all:
  - all screens with orientation = "landscape"

device_type_*:
  - for each configured screen type (e.g. device_type_raspberry_pi)

region_*:
  - optional: based on geo data or tags
```

These automatic groups are **read-only** in the admin UI but fully usable for campaigns.

## 7. Example: Installing a New Info Wall

**Scenario:** the admin installs a new 3x3 info wall with screens info01-info09.

**Steps:**

1. **Create the screens** (via the provisioning UI or directly)
   ```
   info01, info02, ..., info09
   All: orientation portrait, device type "raspberry_pi"
   ```

2. **Create the groups**
   ```
   screen_groups:
     - slug: wall-all,   name: "Infowand Alle",    type: physical
     - slug: wall-row-1, name: "Infowand Reihe 1", type: physical
     - slug: wall-row-2, name: "Infowand Reihe 2", type: physical
     - slug: wall-row-3, name: "Infowand Reihe 3", type: physical
   ```

3. **Assign the screens to the groups**
   ```
   wall-all:   info01-info09
   wall-row-1: info01, info02, info03
   wall-row-2: info04, info05, info06
   wall-row-3: info07, info08, info09
   ```

4. **Define the layout**
   ```
   layout_definitions:
     - slug: 3x3_infowand, rows: 3, cols: 3

   layout_slots:
     - wall-r1-c1, wall-r1-c2, wall-r1-c3 (row 0)
     - wall-r2-c1, wall-r2-c2, wall-r2-c3 (row 1)
     - wall-r3-c1, wall-r3-c2, wall-r3-c3 (row 2)

   slot_screen_assignments:
     - wall-r1-c1 → info01
     - wall-r1-c2 → info02
     - ... (9 in total)
   ```

5. **Use campaigns**
   ```
   Template: Schriftzug
   Target group: wall-all
   Layout: 3x3_infowand
   → the campaign can be activated immediately
   ```

## 8. Summary

The grouping and slot model:

- **is flexible** — physical, functional, and device-type groups
- **is hierarchical** — groups can contain subgroups
- **is automatic** — groups such as "all" and "online-all" are inferred
- **is geometric** — slots define layouts for distributed visuals
- **is scalable** — new screens are simply assigned to groups
- **is intuitive** — the admin UI shows assignments and previews

@@ -1,483 +0,0 @@
# Info-Board Neu - Activation UI for Seasonal and Temporary Campaigns

## Goal

The activation UI lets the admin roll out campaigns to screens in a targeted, time-controlled way — immediately or on a schedule.

This document describes:

- the activation workflows in the admin UI
- time-controlled activation (scheduler)
- screen assignment and preview
- status and control at runtime

See also `docs/TEMPLATE-EDITOR.md` for template management and `docs/TEMPLATE-KONZEPT.md` for the conceptual foundations.

## 1. Activation Workflows

### Workflow 1 — Quick Immediate Activation

**Scenario:** the admin has a template and wants to start it right away.

**Path:**

Admin → Templates → [template] → "Activate"

```
┌──────────────────────────────────────────┐
│ Start campaign: Weihnachtsmotiv 2025     │
├──────────────────────────────────────────┤
│                                          │
│ Campaign name (unique)                   │
│ [ Weihnachten 2025 _________________]    │
│   Preview: morz_campaign_xmas2025        │
│                                          │
│ Check target group                       │
│   from template: all screens (13)        │
│   [Change group]  [Change screens]       │
│                                          │
│ Duration                                 │
│ ⦿ Start immediately                      │
│   valid from now                         │
│ ○ Start scheduled                        │
│   [pick date/time]                       │
│                                          │
│ Valid until                              │
│ [pick date/time]                         │
│ or [ ] unlimited                         │
│                                          │
│ Priority over the playlist               │
│ [10____________]  higher = more urgent   │
│ Default: 1                               │
│                                          │
│ Auto-deactivate on expiry?               │
│ ⦿ Yes, then show the fallback            │
│ ○ No, deactivate manually                │
│                                          │
│ Preview of affected screens              │
│ [screenshot preview with campaign        │
│  content for the selected screens]       │
│                                          │
│ [Activate]  [Cancel]                     │
└──────────────────────────────────────────┘
```

**Action:**

- the server stores the campaign with `active = true`, `valid_from = NOW()`
- the server expands the target group into concrete screens
- all affected screens receive the MQTT signal `playlist-changed` (the playlist itself is unchanged, but the campaign priority changes)
- the screens synchronize and load the new campaign content

### Workflow 2 — Scheduled Activation

**Scenario:** the admin prepares a campaign that should only start the next day at 08:00.

**Path:**

Admin → Templates → [template] → "Activate" → "Start scheduled"

```
┌──────────────────────────────────────────┐
│ Scheduled activation: Ostern 2025        │
├──────────────────────────────────────────┤
│                                          │
│ Campaign name                            │
│ [ Ostern_Dekoration_2025 ____________ ]  │
│                                          │
│ Start date and time                      │
│ [2025-04-14] [08:00]  [calendar/clock]   │
│                                          │
│ End date and time (optional)             │
│ [2025-04-21] [20:00]  [calendar/clock]   │
│ or [ ] no end date                       │
│                                          │
│ Priority                                 │
│ [1_____________]                         │
│                                          │
│ Auto-deactivate?                         │
│ ⦿ Yes                                    │
│ ○ No                                     │
│                                          │
│ Status                                   │
│ ◯ SCHEDULED — will be activated on       │
│   2025-04-14 08:00                       │
│                                          │
│ Set a reminder (optional)                │
│ [ ] reminder email 1 day before          │
│ [ ] reminder email 1 hour before         │
│                                          │
│ [Schedule & save]  [Cancel]              │
└──────────────────────────────────────────┘
```

**Action:**

- the server stores the campaign with `active = false`, `valid_from = 2025-04-14 08:00`
- the server creates an internal scheduler job
- the admin sees the campaign in the list with status "SCHEDULED"
- at the scheduled time:
  - the scheduler sets `campaigns.active = true`
  - an MQTT signal is sent to all affected screens
  - optionally, a reminder email goes to the admin

### Workflow 3 — Quick Deactivation

**Scenario:** a campaign is running and the admin wants to stop it immediately.

**Path:**

Admin → Campaigns → [running campaign] → "Deactivate"

```
┌──────────────────────────────────────────┐
│ Deactivate campaign?                     │
├──────────────────────────────────────────┤
│                                          │
│ Campaign: Weihnachten 2025               │
│ Status: ACTIVE since 2025-12-01 09:00    │
│ Affected screens: 13                     │
│                                          │
│ Action:                                  │
│ ⦿ Deactivate immediately                 │
│   screens then show the tenant           │
│   playlist or the fallback again         │
│                                          │
│ ○ With delay (fade-out)                  │
│   [2 min] [5 min] [pick time]            │
│   Useful: dim lights, lower music,       │
│   etc. before the content switch         │
│                                          │
│ [Yes, deactivate]  [Cancel]              │
└──────────────────────────────────────────┘
```

**Action:**

- the server sets `campaigns.active = false`
- the server sends an MQTT signal to the screens
- the screens switch immediately (or after the delay) to the fallback/playlist
- the campaign disappears from the "Active campaigns" list

## 2. Scheduling and the Scheduler

### Automated Scheduler Jobs

The server runs a simple scheduler as a goroutine or as a separate service.

```go
// Pseudocode (simplified sketch)
type CampaignScheduler struct {
	db *CampaignStore
}

// On startup
func main() {
	scheduler := NewCampaignScheduler()
	go scheduler.RunScheduler(ctx)
}

// In the background
func (s *CampaignScheduler) RunScheduler(ctx context.Context) {
	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			now := time.Now()
			// Check all scheduled campaigns
			for _, c := range s.db.GetScheduledCampaigns() {
				if !c.Active && !now.Before(c.ValidFrom) {
					// Activate the campaign
					s.ActivateCampaign(c.ID)
				}
				if c.Active && c.ValidUntil != nil && !now.Before(*c.ValidUntil) {
					// Deactivate the campaign
					s.DeactivateCampaign(c.ID)
				}
			}
		}
	}
}
```

### Persistence Across Restarts

Scheduler jobs are stored in the database (columns `valid_from`, `valid_until`, `active` in the `campaigns` table).

When the server restarts:

1. the server loads all scheduled/active campaigns
2. on every tick (1 min) the scheduler checks whether an activation/deactivation is due
3. no data loss, no complex job persistence needed

### Reminders and Notifications

**Optional (Phase 2):**

- email reminder N hours before activation
- webhook notification for external systems
- in-app notification in the admin dashboard

## 3. Screen Assignment and Preview

### Interactive Target-Group Selection

While creating a campaign, the admin decides which screens are affected.

```
Target group
⦿ All screens
○ Select by group:
   □ wall-all (9 screens)
   □ single-info (2 screens)
   □ vertretungsplan-all (2 screens)
○ Individual screens:
   [ Search: "info" ]
   □ info01 (portrait)
   □ info02 (portrait)
   ☑ info03 (portrait)
   □ info04 (portrait)
   ...
```

### Rendering Preview

The admin sees how the campaign looks on the various target screens:

```
Affected screens: 4 selected

┌─────────────────────────────────────┐
│ info01 (portrait, 1920x1080)        │
│ ┌────────────────────────────────┐  │
│ │                                │  │
│ │ [campaign content: image]      │  │
│ │ (portrait assets used)         │  │
│ │                                │  │
│ └────────────────────────────────┘  │
└─────────────────────────────────────┘

┌─────────────────────────────────────┐
│ info05 (landscape, 2560x1440)       │
│ ┌────────────────────────────────┐  │
│ │ [campaign content: image]      │  │
│ │ (landscape assets used)        │  │
│ └────────────────────────────────┘  │
└─────────────────────────────────────┘

[scroll to see more screens]
```

### Live Overview at Runtime

While a campaign is active, the admin dashboard shows:

```
Campaign: Weihnachten 2025
Status: ACTIVE since 2025-12-01 09:00

Affected screens: 13
  ✓ actively showing: 11 (info01-info08, info10, info11, info13)
  ◯ waiting for sync: 1 (info09)
  ✗ offline: 1 (info12)

Last checked: 30 seconds ago

[Refresh]  [Deactivate]  [Edit]
```
|
||||
|
||||
## 4. Kampagnen-Verwaltung waehrend Laufzeit
|
||||
|
||||
### Aktive Kampagnen — Haupt-Dashboard
|
||||
|
||||
**Seite:** Admin → Aktive Kampagnen (oder Campaigns)
|
||||
|
||||
```
|
||||
┌─────────────────────────────────┐
|
||||
│ Aktive Kampagnen │
|
||||
├─────────────────────────────────┤
|
||||
│ │
|
||||
│ Weihnachten 2025 einfuehrung │ ▼
|
||||
│ Template: Weihnachtsmotiv 2025 │
|
||||
│ Aktiv seit: 2025-12-01 09:00 │
|
||||
│ Aktiv bis: 2025-12-26 23:59 │
|
||||
│ Betroffene: 13 Screens │
|
||||
│ Status: ✓ Auf allen Screens ok │
|
||||
│ │
|
||||
│ [Vorschau] [Bearbeiten] │
|
||||
│ [Deaktivieren] │
|
||||
│ │
|
||||
├─────────────────────────────────┤
|
||||
│ │
|
||||
│ Event-Tag 25.03 │
|
||||
│ Template: screen_specific_scene │
|
||||
│ Aktiv seit: 2025-03-25 00:00 │
|
||||
│ Aktiv bis: 2025-03-25 23:59 │
|
||||
│ Betroffene: 4 Screens │
|
||||
│ Status: ◯ 1 Screen offline │
|
||||
│ │
|
||||
│ [Vorschau] [Bearbeiten] │
|
||||
│ [Deaktivieren] │
|
||||
│ │
|
||||
└─────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Scheduled campaigns

**Page:** Admin → Campaigns (All)

```
┌─────────────────────────────────┐
│ Scheduled campaigns             │
├─────────────────────────────────┤
│                                 │
│ Easter decoration 2025          │ ▼
│ Template: full_screen_media     │
│ Status: SCHEDULED               │
│ Starts: 2025-04-14 08:00        │
│ Ends:   2025-04-21 20:00        │
│ Affected: 13 screens            │
│ Reminder: 1 day ahead           │
│                                 │
│ [Preview] [Edit]                │
│ [Activate now] [Delete]         │
│                                 │
├─────────────────────────────────┤
│                                 │
│ Summer campaign                 │
│ Status: SCHEDULED               │
│ Starts: 2025-06-01 00:00        │
│                                 │
│ ...                             │
│                                 │
└─────────────────────────────────┘
```
### Expired campaigns

**Page:** Admin → Campaigns (Archive)

```
Shows inactive/expired campaigns for the audit trail.

[ Campaign ]     Period              Status
Easter 2025      2025-04-14—04-21    auto-deactivated
Carnival         2025-02-28—03-05    manually deactivated
Valentine's Day  2025-02-14          auto-deactivated
```
## 5. Priority management

### Priority setting per campaign

```
Priority relative to tenant playlists
┌─────────────────────────────────┐
│ Slider or number field          │
│                                 │
│ [|━━━━━━━━━━━|     ] 10         │
│  1     5     10           100   │
│                                 │
│ Meaning:                        │
│ 1   = normal campaign           │
│ 10  = high priority (default)   │
│ 100 = emergency / top priority  │
│                                 │
│ This priority takes precedence  │
│ over all tenant playlists.      │
│ (If several campaigns overlap,  │
│ the one with the highest        │
│ priority is used.)              │
└─────────────────────────────────┘
```
### Conflict management (multiple simultaneous campaigns)

If several campaigns are active for the same screen:

1. Sort by priority (highest wins)
2. On equal priority: by start timestamp (newest wins)
3. The admin sees a warning in the status dashboard: "2 campaigns active for info01"

Recommendation: admins should avoid overlapping campaign time ranges.
## 6. Error handling

### What if a screen is offline?

```
A campaign is activated, but screen info03 is currently offline:

1. The server knows that info03 is a target of the campaign
2. The server logs: "Campaign XYZ cannot be delivered to info03 (offline)"
3. info03 has the last valid campaign cached
4. As soon as info03 comes back online:
   - the player synchronizes
   - the server reports: "Campaign XYZ is active"
   - the player downloads and renders it
5. Dashboard status: "info03 — offline, will sync once online"
```
### Rollback after a failed activation

If a campaign is broken (corrupt video, rendering error):

```
1. The screen shows an error overlay
2. The admin is informed (the status API reports the error)
3. Admin option 1: fix the template
   - replace the faulty asset
   - update the campaign
   - resynchronize the screens
4. Admin option 2: quick deactivation
   - switch the campaign off
   - the fallback/playlist takes over again
```
## 7. Data protection and audit

### Audit trail

All campaign changes are logged:

```json
{
  "ts": "2025-03-25T14:22:00Z",
  "event": "campaign_activated",
  "campaign_id": "uuid-...",
  "campaign_name": "Easter decoration",
  "triggered_by_user_id": "admin123",
  "triggered_by_email": "admin@example.com",
  "details": {
    "valid_from": "2025-04-14T08:00:00Z",
    "valid_until": "2025-04-21T20:00:00Z",
    "target_screens_count": 13
  }
}
```

These logs matter for compliance and forensics.
### Visibility restrictions

Only users with the admin role can:

- create/edit campaigns
- edit templates
- schedule activations

Tenant users do not see the campaign management UI.
## 8. Summary

The activation UI:

- **is beginner-friendly** — multi-step forms with preview
- **supports immediate and scheduled activation** — spontaneous or weeks in advance
- **is transparent** — live status and error reporting
- **is automated** — the scheduler handles switching on and off
- **is safe** — audit trail and rollback options
- **is robust** — offline screens are synchronized later
@ -1,470 +0,0 @@
# Info-Board Neu - Logging and Monitoring Concept

## Goal

Logging and monitoring give the operations team full transparency into:

- behavior and errors on the player
- behavior and errors on the server
- health status of all screens
- network and synchronization problems
- capacity utilization and trends

The concept must stay robust against disk-space shortages on the Raspberry Pi and allow centralized analysis on the server.
## Logging architecture

### General principles

- **structured JSON logging** — structured fields instead of free-text messages
- **log levels**: `debug`, `info`, `warn`, `error`, `fatal`
- **central analysis** — players log locally and also forward to the server
- **rotation and cleanup** — local logs are rotated and compressed
- **privacy** — no sensitive content (passwords, API keys) in the logs

### Components and their logs
## 1. Player logs

### Player agent

The agent logs:

- **Startup/shutdown**

```json
{
  "ts": "2025-03-23T14:22:00Z",
  "level": "info",
  "component": "agent",
  "event": "startup",
  "config_file": "/etc/signage/config.yml",
  "screen_id": "info01"
}
```

- **Server sync**

```json
{
  "ts": "2025-03-23T14:22:05Z",
  "level": "info",
  "component": "agent.sync",
  "event": "sync_complete",
  "duration_ms": 342,
  "items_synced": 15,
  "bytes_downloaded": 4521000
}
```

- **MQTT events**

```json
{
  "ts": "2025-03-23T14:22:10Z",
  "level": "info",
  "component": "agent.mqtt",
  "event": "playlist_changed",
  "source": "mqtt",
  "cause": "playlist-changed-event"
}
```

- **Errors**

```json
{
  "ts": "2025-03-23T14:22:15Z",
  "level": "error",
  "component": "agent.cache",
  "event": "download_failed",
  "media_id": "abc123",
  "url": "https://cdn.example.com/video.mp4",
  "error": "connection_timeout",
  "retry_count": 2
}
```

- **Watchdog events** (see WATCHDOG-KONZEPT.md)
### Player UI

The local web app logs:

- **Item changes**

```json
{
  "ts": "2025-03-23T14:23:00Z",
  "level": "info",
  "component": "ui",
  "event": "item_change",
  "previous_item": "img-001",
  "current_item": "video-002",
  "source": "campaign"
}
```

- **Rendering errors**

```json
{
  "ts": "2025-03-23T14:23:05Z",
  "level": "warn",
  "component": "ui.renderer",
  "event": "render_failed",
  "item_id": "url-003",
  "media_type": "webpage",
  "error": "load_timeout",
  "timeout_ms": 10000
}
```

- **Overlay status changes**

```json
{
  "ts": "2025-03-23T14:23:10Z",
  "level": "info",
  "component": "ui.overlay",
  "event": "status_change",
  "old_status": "online",
  "new_status": "offline",
  "reason": "broker_connection_lost"
}
```
### Chromium

The browser is hard to instrument directly, but the systemd journal captures:

- startup and arguments
- crash messages
- error output on page-load failures
## 2. Server logs

### Backend API

The server logs:

- **HTTP requests** (structured; no full request body)

```json
{
  "ts": "2025-03-23T14:22:20Z",
  "level": "info",
  "component": "server.http",
  "method": "POST",
  "path": "/api/v1/screens/info01/playlist",
  "status": 200,
  "duration_ms": 34,
  "user_id": "admin123",
  "tenant_id": "tenant01"
}
```

- **Database operations** (debug level only)

```json
{
  "ts": "2025-03-23T14:22:25Z",
  "level": "debug",
  "component": "server.db",
  "query": "UPDATE playlists SET updated_at = NOW() WHERE screen_id = $1",
  "duration_ms": 5,
  "rows_affected": 1
}
```

- **Errors and exceptions**

```json
{
  "ts": "2025-03-23T14:22:30Z",
  "level": "error",
  "component": "server.api",
  "event": "media_download_failed",
  "media_id": "abc123",
  "reason": "storage_quota_exceeded",
  "available_bytes": 1024000,
  "required_bytes": 50000000
}
```

- **Admin commands**

```json
{
  "ts": "2025-03-23T14:22:35Z",
  "level": "info",
  "component": "server.command",
  "event": "command_sent",
  "command_type": "restart_player",
  "target_screen": "info01",
  "triggered_by_user": "admin123"
}
```
### Provisioning worker

```json
{
  "ts": "2025-03-23T14:22:40Z",
  "level": "info",
  "component": "server.provision",
  "event": "provision_started",
  "screen_id": "new_display_01",
  "target_ip": "192.168.1.50",
  "ansible_playbook": "site.yml"
}
```
## Log format and output

### Structure

All logs follow this schema:

```json
{
  "ts": "2025-03-23T14:22:00Z",    // ISO 8601, UTC
  "level": "info|warn|error|debug",
  "component": "agent|ui|server.api|server.db|server.mqtt",
  "event": "descriptive_name",
  "screen_id": "info01",           // only relevant on the player
  "tenant_id": "tenant01",         // only relevant on the server
  "user_id": "user123",            // server-side, auth events only
  "duration_ms": 342,              // performance events only

  // error-specific fields
  "error": "error_code",
  "error_message": "readable error",

  // domain-specific fields
  "item_id": "...",
  "media_type": "image|video|pdf|webpage",
  "source": "campaign|tenant_playlist|fallback",

  // arbitrary additional fields
  "details": { ... }
}
```
### Output targets

#### On the player

1. **stdout/stderr** via the `log/slog` JSON formatter
   - captured by the systemd journal
   - retrievable via `journalctl`

2. **Local file** `/var/log/signage/player.log`
   - JSON, one line per event
   - rotation at 100 MB, 10 archives

3. **Urgent errors** forwarded to the server via HTTP POST
   - `POST /api/v1/screens/{screenSlug}/log-event`
   - asynchronous; failures while offline are ignored
   - `error` and `fatal` events only
#### On the server

1. **stdout/stderr** with structured logging
   - captured by Docker/systemd
   - retrievable via `docker logs` or `journalctl`

2. **PostgreSQL** (Phase 2+)
   - important errors and status events in a `logs` table
   - query UI in the admin dashboard

3. **File storage** (Docker volume)
   - `/var/log/signage/server.log`
   - rotation and compression handled by the container orchestrator
## Log level strategy

### Debug (development)

- SQL queries
- HTTP request details
- internal state transitions

In production: `--log-level warn` or `--log-level info`

### Info (standard)

- startup/shutdown
- successful operations
- status changes
- synchronization events

### Warn (needs attention)

- timeouts
- retry attempts
- deprecated APIs
- suboptimal performance

### Error (problematic)

- failed HTTP requests
- database errors
- missing resources
- auth failures

### Fatal (critical)

- unrecoverable errors
- the process exits afterwards
## Monitoring metrics

### Player side

Metrics the agent periodically reports to the server:
```json
{
  "screen_id": "info01",
  "ts": "2025-03-23T14:25:00Z",
  "heartbeat": {
    "uptime_seconds": 86400,
    "last_sync_at": "2025-03-23T14:24:55Z",
    "seconds_since_last_sync": 5,
    "sync_status": "ok|failed|pending",
    "sync_fail_count_24h": 0
  },
  "resources": {
    "cpu_percent": 25,
    "memory_percent": 45,
    "disk_free_mb": 2048,
    "disk_used_percent": 35
  },
  "network": {
    "broker_connected": true,
    "server_reachable": true,
    "ip_addresses": ["192.168.1.10"],
    "signal_strength_dbm": -55
  },
  "playback": {
    "current_item_id": "img-001",
    "source": "campaign",
    "rendering_status": "ok",
    "seconds_on_current_item": 23
  },
  "errors_last_hour": [
    {
      "event": "download_failed",
      "media_id": "video-999",
      "count": 2
    }
  ]
}
```
**Transport:** HTTP `POST /api/v1/screens/{screenSlug}/heartbeat` every 60 seconds
### Server side

The server collects and monitors:
```json
{
  "screen_id": "info01",
  "status": "online|offline|degraded|error",
  "last_heartbeat_at": "2025-03-23T14:25:00Z",
  "seconds_since_last_heartbeat": 0,
  "heartbeat_interval_sec": 60,
  "offline_since_sec": null,

  "screenshot": {
    "latest_at": "2025-03-23T14:25:00Z",
    "seconds_since_latest": 0
  },

  "sync": {
    "latest_at": "2025-03-23T14:24:55Z",
    "latest_duration_ms": 342,
    "fail_count_24h": 1,
    "last_error": null
  },

  "content": {
    "current_item": "img-001",
    "source": "campaign",
    "campaign_id": "xmas-2025"
  },

  "performance": {
    "cpu_avg_percent_1h": 22,
    "memory_avg_percent_1h": 44,
    "disk_free_mb": 2048
  }
}
```
These metrics are stored in PostgreSQL and form the basis for:

- the status dashboard
- alerts
- trend analyses
- capacity planning
## Log rotation on the player

The Raspberry Pi has limited storage, so log rotation must be aggressive:
```
# /etc/logrotate.d/signage

/var/log/signage/player.log
{
    size 50M
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    create 0644 root root
    postrotate
        systemctl reload signage-agent.service || true
    endscript
}

/var/log/signage/watchdog.log
{
    size 20M
    rotate 3
    compress
    delaycompress
    missingok
    notifempty
    create 0644 root root
}
```
Result:

- `player.log`: max 50 MB * 5 = 250 MB
- `watchdog.log`: max 20 MB * 3 = 60 MB
- older logs compress to roughly 10% of their original size
## Alerting strategy

### Alert criteria
| Condition | Severity | Action |
|---|---|---|
| Screen offline > 15 min | High | Email + dashboard alert |
| Screen offline > 2 h | Critical | Email + SMS |
| Sync failure rate > 50% within 1 h | Medium | Email |
| Disk full on a player | Critical | Email + stop recording |
| CPU > 90% for 5 min | Medium | Warning + analysis |
| Provisioning failed | High | Email to the provisioner |
### Alert channels (Phase 2)

1. **Dashboard notifications** (visible in the admin UI)
2. **Email** to configured admin addresses
3. **Webhook** for external monitoring systems (Zabbix, Grafana)
4. **Server API** `/api/v1/admin/alerts` for polling
## Summary

The logging and monitoring concept:

- **is structured** — JSON, not free text
- **is distributed** — local on the player plus central on the server
- **is storage-aware** — rotation and compression
- **provides overview** — heartbeat and metrics for every screen
- **enables diagnosis** — detailed logs on failure
- **scales** — the approach works for any number of players
@ -1,610 +0,0 @@
# Info-Board Neu - Jobrunner Concept for Ansible-Based Initial Installation

## Goal

The jobrunner executes provisioning jobs from the admin backend that bring a new display into technical operation.

This document describes:

- how an admin provisions a new screen from the web UI
- how the server orchestrates Ansible playbooks
- how progress is displayed
- security and error handling

Fundamentals of the provisioning strategy are covered in `docs/PROVISIONIERUNGSKONZEPT.md`.
## 1. Provisioning workflow in the admin UI

### Page: Admin → Screens → New
```
┌──────────────────────────────────────────┐
│ Provision a new screen                   │
├──────────────────────────────────────────┤
│                                          │
│ Step 1 — Basic data                      │
│                                          │
│ Screen ID / slug *                       │
│ [ info10 ]                               │
│ (must be unique, alphanumeric)           │
│                                          │
│ Display name *                           │
│ [ Infowand Bottom-Left _______________ ] │
│                                          │
│ Description                              │
│ [ New info-wall display, pos. 7______ ]  │
│                                          │
│ Device type *                            │
│ ⦿ Raspberry Pi 4                         │
│ ○ Raspberry Pi 5                         │
│ ○ x86 Linux kiosk                        │
│                                          │
│ Resolution *                             │
│ [1920 x 1080 ] default for RPi           │
│                                          │
│ Orientation *                            │
│ ⦿ portrait                               │
│ ○ landscape                              │
│                                          │
│ Tenant assignment                        │
│ [ dropdown: all tenants + "admin" ]      │
│                                          │
│ [Next >] [Cancel]                        │
└──────────────────────────────────────────┘
```
### Step 2 — Network and SSH settings
```
┌──────────────────────────────────────────┐
│ Step 2 — Access to the hardware          │
│                                          │
│ Target IP address *                      │
│ [ 192.168.1.50 ]                         │
│                                          │
│ SSH port                                 │
│ [ 22 ] default                           │
│                                          │
│ Bootstrap user *                         │
│ ⦿ root                                   │
│ ○ pi                                     │
│ ○ custom: [ ________________ ]           │
│                                          │
│ Bootstrap authentication *               │
│ ⦿ Password (initial, replaced by a key): │
│   [ Password ____________ ]              │
│ ○ SSH key (only if already present):     │
│   [ choose file ] or                     │
│   [ paste PEM key ]                      │
│                                          │
│ Connection test                          │
│ [SSH test] [PING test]                   │
│                                          │
│ [Next >] [Back] [Cancel]                 │
└──────────────────────────────────────────┘
```
### Step 3 — Configuration and options
```
┌──────────────────────────────────────────┐
│ Step 3 — Configuration                   │
│                                          │
│ Fallback directory (local on the player) │
│ [ /var/lib/signage/fallback ]            │
│                                          │
│ Snapshot interval (seconds)              │
│ [ 300 ] 0 = disabled                     │
│                                          │
│ MQTT broker address (target server)      │
│ [ mqtt.example.com ] auto-filled         │
│                                          │
│ Server API address                       │
│ [ https://signage.example.com/api ]      │
│   auto-filled                            │
│                                          │
│ Group assignment (optional)              │
│ [ checkboxes: wall-all, wall-row-1 ]     │
│                                          │
│ Tags / labels (optional)                 │
│ [ mainfloor, hightrafficarea ]           │
│                                          │
│ [Next >] [Back] [Cancel]                 │
└──────────────────────────────────────────┘
```
### Step 4 — Review and start
```
┌──────────────────────────────────────────┐
│ Step 4 — Overview & start                │
│                                          │
│ Summary:                                 │
│                                          │
│ Screen: info10                           │
│ Name: Infowand Bottom-Left               │
│ Type: Raspberry Pi 4                     │
│ IP: 192.168.1.50                         │
│ Resolution: 1920 x 1080                  │
│ Orientation: portrait                    │
│ Tenant: admin                            │
│                                          │
│ Establishing SSH connection...           │
│ [✓] SSH access verified                  │
│ [✓] Path permissions ok                  │
│ [✓] Sufficient disk space (15GB)         │
│                                          │
│ Provisioning playbook:                   │
│ [ ] site.yml                             │
│  ├─ signage_base (packages, kernel)      │
│  ├─ signage_display (X11, Chromium)      │
│  ├─ signage_player (agent, config)       │
│  └─ signage_provision (setup jobs)       │
│                                          │
│ Warning:                                 │
│ ! This process cannot be interrupted.    │
│   Typical duration: 10-15 min.           │
│                                          │
│ [Start provisioning] [Cancel]            │
└──────────────────────────────────────────┘
```
## 2. Provisioning job: server-side orchestration

### Architecture
```
┌─────────────────────────────────────────┐
│ Admin UI HTTP request                   │
│ POST /api/v1/admin/provision            │
└────────────┬────────────────────────────┘
             │
             ▼
┌─────────────────────────────────────────┐
│ Backend API (Go)                        │
│ - validates input                       │
│ - creates a ProvisioningJob in the DB   │
│ - queues the job in a job broker        │
│   (Redis etc.)                          │
└────────────┬────────────────────────────┘
             │
             ▼
┌─────────────────────────────────────────┐
│ Jobrunner worker (goroutine or          │
│ separate Go service)                    │
│ - runs inside the server container      │
│ - streams progress via WebSocket        │
└────────────┬────────────────────────────┘
             │
             ▼
┌─────────────────────────────────────────┐
│ Ansible executor                        │
│ ansible-playbook site.yml               │
│   -i inventory.ini                      │
│   -e vars.yml                           │
└────────────┬────────────────────────────┘
             │
             ▼
┌─────────────────────────────────────────┐
│ Target device (Raspberry Pi)            │
│ SSH: root@192.168.1.50                  │
│ - installs packages                     │
│ - starts services                       │
│ - synchronizes config                   │
└─────────────────────────────────────────┘
```
### Provisioning job model
```sql
CREATE TABLE provisioning_jobs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    screen_id UUID NOT NULL REFERENCES screens(id),
    status TEXT NOT NULL CHECK (status IN (
        'pending', 'running', 'completed', 'failed'
    )),
    started_at TIMESTAMPTZ,
    completed_at TIMESTAMPTZ,

    -- SSH/Ansible details
    target_ip TEXT NOT NULL,
    target_port INT NOT NULL DEFAULT 22,
    target_user TEXT NOT NULL,

    -- Ansible executor reference
    ansible_job_id TEXT, -- job ID from the Ansible executor

    -- error handling
    error_log TEXT, -- on failure

    created_by_user_id TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```
### Provisioning log model
```sql
CREATE TABLE provisioning_logs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    job_id UUID NOT NULL REFERENCES provisioning_jobs(id) ON DELETE CASCADE,
    line_number INT NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- log source
    source TEXT NOT NULL CHECK (source IN ('ansible', 'agent', 'system')),
    level TEXT NOT NULL CHECK (level IN ('info', 'warn', 'error')),

    -- message
    message TEXT NOT NULL,

    UNIQUE(job_id, line_number)
);
```
## 3. Jobrunner implementation

### Job processing (pseudocode)
```go
type ProvisioningJobRunner struct {
    db             *sql.DB
    ansibleBinPath string
    logChannel     chan ProvisioningLogMessage
}

func (r *ProvisioningJobRunner) ProcessJob(ctx context.Context, jobID uuid.UUID) error {
    // 1. Load the job from the DB
    job := r.db.GetProvisioningJob(jobID)

    // 2. Set status to "running"
    r.db.UpdateProvisioningJob(job.ID, map[string]interface{}{
        "status":     "running",
        "started_at": time.Now(),
    })

    // 3. Generate the Ansible inventory (written to a temp file)
    inventoryPath := r.generateInventory(job)
    // [192.168.1.50]
    // ansible_user=root
    // ansible_password=***
    // screen_id=info10
    // ansible_become=yes

    // 4. Generate vars.yml
    varsPath := r.generateVars(job)
    // screen_id: info10
    // display_name: "Infowand Bottom-Left"
    // orientation: portrait
    // mqtt_broker: mqtt.example.com
    // etc.

    // 5. Run Ansible
    cmd := exec.CommandContext(ctx,
        r.ansibleBinPath,
        "site.yml",
        "-i", inventoryPath,
        "-e", varsPath,
        "-v", // verbose
    )

    // 6. Pipe Ansible output into log storage + WebSocket
    stdout, _ := cmd.StdoutPipe()
    stderr, _ := cmd.StderrPipe()

    go r.streamLogs(job.ID, stdout, "ansible")
    go r.streamLogs(job.ID, stderr, "ansible")

    // 7. Wait for completion
    err := cmd.Run()

    // 8. Update the job status
    if err != nil {
        r.db.UpdateProvisioningJob(job.ID, map[string]interface{}{
            "status":       "failed",
            "completed_at": time.Now(),
            "error_log":    err.Error(),
        })
        return err
    }

    r.db.UpdateProvisioningJob(job.ID, map[string]interface{}{
        "status":       "completed",
        "completed_at": time.Now(),
    })

    return nil
}

func (r *ProvisioningJobRunner) streamLogs(jobID uuid.UUID, reader io.Reader, source string) {
    scanner := bufio.NewScanner(reader)
    lineNum := 1

    for scanner.Scan() {
        line := scanner.Text()

        // Persist in the DB
        r.db.InsertProvisioningLog(ProvisioningLog{
            JobID:      jobID,
            LineNumber: lineNum,
            Source:     source,
            Level:      parseLogLevel(line), // heuristic
            Message:    line,
        })

        // Forward to the WebSocket (see the "Progress" section)
        r.logChannel <- ProvisioningLogMessage{
            JobID: jobID,
            Line:  line,
        }

        lineNum++
    }
}
```
### Ansible execution via a jump host (optional)

If the server cannot reach the target devices directly, a jump host can be used:
```ini
# ansible.cfg
[defaults]
inventory = inventory.ini
host_key_checking = False
retries = 3

[privilege_escalation]
become = True
become_method = sudo
```
```ini
# inventory.ini for the jump-host scenario
[targets]
192.168.1.50 ansible_user=root ansible_password=*** \
  ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p jumphost@example.com"'
```
## 4. Progress and live updates

### WebSocket channel for real-time logs

**HTTP upgrade to WebSocket:**
```
GET /api/v1/admin/provision/{jobID}/logs
Upgrade: websocket
Connection: Upgrade
```
**The server streams continuously:**
```json
{
  "type": "log_line",
  "timestamp": "2025-03-25T14:22:00Z",
  "line": "TASK [signage_base : Update package cache] **",
  "source": "ansible",
  "level": "info"
}
```

```json
{
  "type": "progress",
  "timestamp": "2025-03-25T14:22:15Z",
  "current_task": "signage_base : Update package cache",
  "task_number": 3,
  "total_tasks": 12,
  "percent": 25
}
```

```json
{
  "type": "status_change",
  "timestamp": "2025-03-25T14:35:00Z",
  "status": "completed",
  "duration_seconds": 780
}
```
### UI display during provisioning
```
┌──────────────────────────────────────────┐
│ Provisioning running: info10             │
│ Started: 5 min. ago                      │
│ Estimated time remaining: 8 min.         │
├──────────────────────────────────────────┤
│                                          │
│ [████████████░░░░░░░░░░░░░░] 33%         │
│                                          │
│ Current task:                            │
│ ⊙ signage_base : Update package cache    │
│                                          │
│ Latest logs:                             │
│ ├─ [14:22:00] TASK [signage_base ...]    │
│ ├─ [14:22:05] ok: [192.168.1.50]         │
│ ├─ [14:22:10] TASK [signage_display]     │
│ ├─ [14:22:15] installing Chromium        │
│ └─ [14:22:20] ...                        │
│                                          │
│ [Auto-refresh] [Pause] [Cancel]          │
│ (Cancel: the SSH connection is not       │
│  dropped immediately, but the job stops) │
└──────────────────────────────────────────┘
```
## 5. Error handling and recovery

### Failure scenarios
| Failure | Cause | Recovery |
|---|---|---|
| SSH connection failed | wrong IP, wrong password, firewall | logs show the SSH error; the admin can correct the credentials and restart |
| Ansible playbook failed | package version conflict, disk full | logs show which task failed; the admin can SSH in manually or retry the job |
| Timeout after 30 min. | very slow network or the device hangs | the job is aborted; the admin can check the connection and restart |
| Package download failed | mirror offline, network interruption | Ansible retries automatically 3x; logs show the wget error |
### Retry logic
```
Strategy: exponential backoff for playbook failures
Failure 1:  retry immediately
Failure 2:  wait 5s, retry
Failure 3:  wait 15s, retry
Failure 4+: give up, surface the error
```
### Admin recovery

If a job has failed:
```
|
||||
┌──────────────────────────────────────────┐
|
||||
│ Provisioning fehlgeschlagen: info10 │
|
||||
│ │
|
||||
│ Fehler: │
|
||||
│ ssh: Could not resolve hostname │
|
||||
│ (DNS-Fehler oder Geraet nicht erreichbar)│
|
||||
│ │
|
||||
│ Empfehlung: │
|
||||
│ 1. IP-Adresse pruefen │
|
||||
│ 2. Geraet von Hand SSH-en und testen │
|
||||
│ 3. Job neu starten: [Neuer Versuch] │
|
||||
│ │
|
||||
│ Komplette Logs herunterladen: │
|
||||
│ [logs-info10-20250325.txt] │
|
||||
│ │
|
||||
│ [Neuer Versuch] [Logs zeigen] [Zurueck]│
|
||||
└──────────────────────────────────────────┘
|
||||
```

## 6. Security Aspects

### SSH Key Management

**Phase 1 — bootstrap with a password:**

```
Admin enters the password
    ↓
Server does NOT store the password
    ↓
Server hands it to Ansible only for the duration of this session
    ↓
Ansible logs in and generates an SSH key
    ↓
The SSH key is installed on the device as an authorized_key
    ↓
The password is deleted or disabled on the device
```

**Phase 2 — permanent access via SSH key:**

```
The server stores the SSH key in a secrets backend (e.g. HashiCorp Vault)
Future Ansible runs use that key
```

### Ansible Vault for Sensitive Data

```yaml
# roles/signage_player/defaults/main.yml
server_api_key: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  abcd1234...
```

The vault passphrase is:

- never stored in plaintext
- handed to Ansible by the server only at runtime
- never written to logs

### Passwordless Sudo

Ansible escalates privileges via `sudo` without a password prompt:

```sudoers
# /etc/sudoers.d/ansible-signage
ansible ALL=(ALL) NOPASSWD: ALL
```

(Alternatively: with a password that Ansible asks for once at the start.)

## 7. Integration with the Existing System

### Provisioning Trigger from the Admin UI

```
Admin page: Screens → "+ New screen"
    ↓
Form collects the basic data
    ↓
POST /api/v1/admin/provision
    ↓
Backend:
  1. Inserts the screen into the `screens` table
  2. Creates a ProvisioningJob in `provisioning_jobs`
  3. Queues the job in the broker
    ↓
Jobrunner:
  1. Fetches the job from the broker
  2. Starts Ansible
  3. Streams logs via websocket
  4. Updates the job status on completion
    ↓
Admin sees live updates in the UI
```

### After Successful Provisioning

```
Job status: "completed"
    ↓
The agent on the display starts
    ↓
The agent registers with the server
    ↓
The server sets the screen status to "online"
    ↓
The admin sees the screen in the table with status "online"
    ↓
The admin can immediately assign campaigns/playlists
```

## 8. Configurable Parameters

In `/etc/signage/provision.yml`:

```yaml
jobrunner:
  max_concurrent_jobs: 3
  ansible_timeout_sec: 1800
  playbook_path: "/srv/ansible/site.yml"
  inventory_template_path: "/srv/ansible/inventory.ini.tpl"
  vars_template_path: "/srv/ansible/vars.yml.tpl"

ssh:
  known_hosts_file: "/etc/signage/.ssh/known_hosts"
  key_storage: "vault"  # or "filesystem"

ansible:
  verbosity: "-vv"  # or "-v", "-vvv"
  extra_args: ""
```
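A minimal sketch of how the backend might model this file as a Go struct. All type and field names are assumptions (the real backend may differ); the YAML tags match the keys shown above, and the defaults mirror the example values.

```go
package main

import "fmt"

// ProvisionConfig mirrors /etc/signage/provision.yml.
// Hypothetical type; only the YAML keys are taken from the doc.
type ProvisionConfig struct {
	Jobrunner struct {
		MaxConcurrentJobs     int    `yaml:"max_concurrent_jobs"`
		AnsibleTimeoutSec     int    `yaml:"ansible_timeout_sec"`
		PlaybookPath          string `yaml:"playbook_path"`
		InventoryTemplatePath string `yaml:"inventory_template_path"`
		VarsTemplatePath      string `yaml:"vars_template_path"`
	} `yaml:"jobrunner"`
	SSH struct {
		KnownHostsFile string `yaml:"known_hosts_file"`
		KeyStorage     string `yaml:"key_storage"` // "vault" or "filesystem"
	} `yaml:"ssh"`
	Ansible struct {
		Verbosity string `yaml:"verbosity"` // "-v", "-vv", "-vvv"
		ExtraArgs string `yaml:"extra_args"`
	} `yaml:"ansible"`
}

// defaultProvisionConfig fills in the example values from above.
func defaultProvisionConfig() ProvisionConfig {
	var c ProvisionConfig
	c.Jobrunner.MaxConcurrentJobs = 3
	c.Jobrunner.AnsibleTimeoutSec = 1800
	c.Jobrunner.PlaybookPath = "/srv/ansible/site.yml"
	c.SSH.KeyStorage = "vault"
	c.Ansible.Verbosity = "-vv"
	return c
}

func main() {
	c := defaultProvisionConfig()
	fmt.Println(c.Jobrunner.MaxConcurrentJobs, c.SSH.KeyStorage)
}
```

Decoding the file with a YAML library would reuse these tags; the struct itself stays library-agnostic.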

## 9. Summary

The jobrunner:

- **is web-driven** — provisioning UI with a multi-step wizard
- **is automated** — Ansible playbooks instead of manual SSH commands
- **is transparent** — live logs and a progress display
- **is secure** — SSH keys, Ansible Vault, no plaintext credentials in logs
- **is resilient** — retry logic and error recovery
- **is extensible** — new roles and tasks can be added without UI changes

103
docs/SCHEMA.md

@ -48,73 +48,21 @@ Purpose:

Columns:

```sql
id            text primary key default gen_random_uuid()::text
tenant_id     text not null references tenants(id) on delete cascade
username      text not null
password_hash text not null
role          text not null default 'tenant'
created_at    timestamptz not null default now()
unique(tenant_id, username)
```

Rules:

- `role` in v1: `admin`, `screen_user`, `tenant`
- `username` is unique only within a tenant (unique constraint on `(tenant_id, username)`)
- `tenant_id` is `NOT NULL` — every user belongs to exactly one tenant
- IDs are `text`, not `uuid`, but contain UUID values (via `gen_random_uuid()::text`)
- Fields such as `email`, `active`, `last_login_at`, and `updated_at` do not exist in v1

### `user_screen_permissions`

Purpose:

- Maps screen users to screens (role-based access)

Columns:

```sql
id         uuid primary key
user_id    text not null references users(id) on delete cascade
screen_id  uuid not null references screens(id) on delete cascade
created_at timestamptz not null default now()
unique(user_id, screen_id)
```

Rules:

- `user_id` must be a user with `role = 'screen_user'`
- `screen_id` must exist; deleting the screen also deletes the permission
- Deleting the user also deletes all of their permissions

### `sessions`

Purpose:

- Session tokens for browser login

Columns:

```sql
id         text primary key default gen_random_uuid()::text
user_id    text not null references users(id) on delete cascade
created_at timestamptz not null default now()
expires_at timestamptz not null default (now() + interval '8 hours')
```

Indexes:

```sql
create index idx_sessions_user_id on sessions(user_id);
create index idx_sessions_expires_at on sessions(expires_at);
```

Rules:

- The session TTL on creation defaults to 8 hours (migration default);
  `AuthStore.CreateSession` passes the actual TTL as a parameter (currently 24 hours)
- Expired sessions are cleaned up hourly by a background ticker (`CleanExpiredSessions`)
- Cookie name: `morz_session`; `HttpOnly=true`, `Secure=true` (except when `MORZ_INFOBOARD_DEV_MODE=true`)

### `screen_groups`

@ -546,35 +494,6 @@ last_failed_sync_at timestamptz null

last_error_message text null
```

## Auth Database Schema

The auth tables are created by `server/backend/internal/db/migrations/002_auth.sql`
and are fully described under the `users` and `sessions` sections above.

Screen user management is created by `server/backend/internal/db/migrations/003_user_screen_permissions.sql`
and is described under the `user_screen_permissions` section above.

The `AuthStore` (`internal/store/auth.go`) provides the following methods:

- `GetUserByUsername(ctx, username)` — load a user by username (incl. `TenantSlug` via LEFT JOIN)
- `CreateSession(ctx, userID, ttl)` — create a new session
- `GetSessionUser(ctx, sessionID)` — load the user for a valid session token
- `DeleteSession(ctx, sessionID)` — delete a session (logout)
- `CleanExpiredSessions(ctx)` — clean up expired sessions
- `EnsureAdminUser(ctx, tenantSlug, password)` — create the admin user at startup if missing
- `VerifyPassword(ctx, userID, password)` — check a password against its bcrypt hash
- `CreateScreenUser(ctx, tenantID, username, password)` — create a new screen user
- `ListScreenUsers(ctx, tenantID)` — list all screen users of a tenant
- `DeleteUser(ctx, userID)` — delete a user and all associated permissions

The `ScreenStore` (`internal/store/screen.go`) provides the following methods:

- `GetAccessibleScreens(ctx, userID)` — all screens the user can access
- `HasUserScreenAccess(ctx, userID, screenID)` — checks whether the user may access the screen
- `AddUserToScreen(ctx, userID, screenID)` — grant a user access to a screen
- `RemoveUserFromScreen(ctx, userID, screenID)` — remove a user's access to a screen
- `GetScreenUsers(ctx, screenID)` — all users with access to the screen

## Important Indexes

Recommended at minimum:

@ -247,96 +247,6 @@ Useful components in `compose/`:

- `mosquitto`
- optionally `worker`

## Authentication

The server uses a session-based login flow with `bcrypt` password hashing.

### Login Flow

1. `GET /login` renders the login form (centered Bulma card).
2. `POST /login` checks username and password:
   - `AuthStore.GetUserByUsername` loads the user incl. the tenant slug.
   - `bcrypt.CompareHashAndPassword` checks the password (cost factor 12).
   - On success, `AuthStore.CreateSession` creates a session (TTL 24 hours).
   - The session token is set as the `morz_session` cookie (`HttpOnly=true`, `Secure=true`).
   - In `DevMode` (`MORZ_INFOBOARD_DEV_MODE=true`), `Secure=false` is used for local HTTP.
   - Redirect depends on the role: `admin` → `/admin`, `tenant` → `/tenant/{slug}/dashboard`.
3. `POST /logout` deletes the session in the DB and removes the cookie.

### Cookie Lifetime

- Default TTL: 24 hours
- The cookie expires automatically; the DB is cleaned hourly by `CleanExpiredSessions`.

### Admin User Bootstrap

At server startup, `EnsureAdminUser` is called if `MORZ_INFOBOARD_ADMIN_PASSWORD` is set.
The admin user is assigned to the tenant with slug `MORZ_INFOBOARD_DEFAULT_TENANT` (default: `morz`).
If the user already exists, nothing happens. Errors are not fatal — the server still starts.

---

## Middleware Chain

All protected routes are guarded by middleware functions in `internal/httpapi/middleware.go`.

```
Incoming request
      │
      ▼
RequireAuth        Reads the morz_session cookie, validates the session via the DB,
                   stores *store.User in the request context.
                   → on failure: redirect to /login?next=<path>
      │
      ├─► RequireAdmin          Checks user.Role == "admin"
      │                         → on failure: 403 Forbidden
      │
      └─► RequireTenant         Checks user.TenantSlug == {tenantSlug} from the URL path.
          Access                Admins are always allowed through.
                                → on failure: 403 Forbidden
```

### Route Groups in the Router

| Group          | Middleware                         | Example routes                              |
|----------------|------------------------------------|---------------------------------------------|
| Public         | none                               | `/healthz`, `/login`, `/api/v1/screens/register` |
| Auth-only      | RequireAuth                        | `/manage/{screenSlug}/...`                  |
| Admin-only     | RequireAuth + RequireAdmin         | `/admin`, `/admin/screens/...`              |
| Tenant-scoped  | RequireAuth + RequireTenantAccess  | `/tenant/{tenantSlug}/...`, `/api/v1/tenants/{tenantSlug}/...` |

The helper function `chain(middlewares...)` in `router.go` wraps handlers from the outside in.

---

## Tenant Dashboard

The tenant self-service dashboard is available at `/tenant/{tenantSlug}/dashboard`.

### URL Scheme

| Method | Path                                          | Description          |
|--------|-----------------------------------------------|----------------------|
| GET    | `/tenant/{tenantSlug}/dashboard`              | Render the dashboard |
| POST   | `/tenant/{tenantSlug}/upload`                 | Upload a media item  |
| POST   | `/tenant/{tenantSlug}/media/{mediaId}/delete` | Delete a media item  |

### Tabs

- **Tab A – My monitors:** Shows screen cards with live status. The status is loaded via JavaScript
  from `GET /api/v1/screens/status` and refreshed every 30 seconds.
  Status badge: `is-success` (online), `is-danger` (offline), `is-warning` (unknown).
- **Tab B – Media library:** Upload form (image, video, PDF, or web URL) and a file list
  with a delete button. After upload or delete, redirect with `?tab=media&flash=uploaded/deleted`.

### Ownership Check on Delete

`HandleTenantDeleteMedia` verifies that `asset.TenantID == tenant.ID` before deleting.
This ensures a tenant cannot delete another tenant's assets,
even by guessing the `mediaId`.

---
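The ownership check can be sketched as follows; `Asset`, `Tenant`, and `canDeleteAsset` are trimmed-down stand-ins for the real store types and handler logic, showing only the comparison described above.

```go
package main

import (
	"errors"
	"fmt"
)

// Asset and Tenant are illustrative stand-ins for the store types;
// only the fields needed for the ownership check are shown.
type Asset struct{ ID, TenantID string }
type Tenant struct{ ID string }

var errForbidden = errors.New("forbidden: asset belongs to another tenant")

// canDeleteAsset mirrors the check in HandleTenantDeleteMedia:
// a tenant may only delete assets it owns.
func canDeleteAsset(a Asset, t Tenant) error {
	if a.TenantID != t.ID {
		return errForbidden
	}
	return nil
}

func main() {
	mine := Asset{ID: "m1", TenantID: "t1"}
	other := Asset{ID: "m2", TenantID: "t2"}
	fmt.Println(canDeleteAsset(mine, Tenant{ID: "t1"}))
	fmt.Println(canDeleteAsset(other, Tenant{ID: "t1"}))
}
```

Doing the check server-side, against the asset row loaded from the DB, is what makes guessing `mediaId` useless.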

## Security Principles

- Store root bootstrap secrets only short-lived or by reference

@ -1,494 +0,0 @@
# Info-Board Neu - Template Editor for Global Campaigns

## Goal

The template editor is the part of the admin UI for the editorial creation and management of global templates and their operational activation as campaigns.

This document defines:

- The steps an admin takes to create a template
- The fields and options the editor offers
- How templates are activated as campaigns
- How this maps onto the data model

Basics on template types, the slot model, and the message wall are in `docs/TEMPLATE-KONZEPT.md`.

## 1. Template Management

### Template List

**Page:** Admin → Templates

**Display:**

Table of all templates:

| Name | Type | Target | Scenes | Created | Status |
|---|---|---|---|---|---|
| Weihnachtsmotiv 2025 | full_screen_media | all | 1 | 2025-01-15 | draft |
| Schriftzug Infowand | message_wall | wall-all | 9 | 2025-02-01 | active |
| Event-Tag 25.03 | screen_specific_scene | [info01, info02, ...] | 2 | 2025-03-01 | draft |

**Actions per row:**

- "Edit" — opens the template editor
- "Copy" — duplicates as a new draft
- "Delete" — only if there are no active campaigns
- "Preview" — shows the layout (for message_wall) or asset galleries
- "Activate" — shortcut to starting a campaign
### Template Editor (Create/Edit)

#### Phase 1 — Basic Data

```
┌─────────────────────────────────────────┐
│ Create new template                     │
├─────────────────────────────────────────┤
│                                         │
│ Name *                                  │
│ [ Weihnachtsmotiv 2025_______________ ] │
│ technical slug is generated             │
│                                         │
│ Template type *                         │
│ ⦿ full_screen_media                     │
│ ○ message_wall                          │
│ ○ screen_specific_scene                 │
│                                         │
│ Description                             │
│ [ Festive graphic for all __________ ]  │
│ [ screens __________________________ ]  │
│                                         │
│ Target / screens *                      │
│ ⦿ All screens                           │
│ ○ Select by group                       │
│   [Dropdown: wall-all, single-all, ...] │
│ ○ Select individual screens             │
│   [Checkbox list with filtering]        │
│                                         │
│ [Next >]   [Cancel]                     │
└─────────────────────────────────────────┘
```

**Validation:**

- Name is required
- Name is unique
- Template type is required
- Target is required (no empty assignment)
#### Phase 2 — Scenes/Content

For `full_screen_media`:

```
┌─────────────────────────────────────────┐
│ Scenes and content                      │
├─────────────────────────────────────────┤
│                                         │
│ Scene 1: full-screen graphic            │
│                                         │
│ Media type *                            │
│ ○ Image                                 │
│ ○ Video                                 │
│ ○ PDF                                   │
│ ⦿ Webpage (HTML)                        │
│                                         │
│ Portrait asset                          │
│ [Upload or URL]                         │
│ [ Choose file ]  [New URL]              │
│ or previously managed assets: [List]    │
│                                         │
│ Landscape asset [optional]              │
│ [ Choose file ]  [New URL]              │
│                                         │
│ Display duration (seconds)              │
│ [60_____]  default: 10                  │
│                                         │
│ Load timeout (seconds)                  │
│ [10_____]  default: 10                  │
│                                         │
│ Valid from                              │
│ [ 2025-03-25 ] [ 00:00 ]                │
│ (empty = valid immediately)             │
│                                         │
│ Valid until                             │
│ [ 2025-04-01 ] [ 00:00 ]                │
│ (empty = unlimited)                     │
│                                         │
│ [+ Add another scene]                   │
│                                         │
│ [< Back]   [Save & activate]            │
│ [Save]                                  │
│ [Cancel]                                │
└─────────────────────────────────────────┘
```

For `message_wall`:

```
┌─────────────────────────────────────────┐
│ Message wall layout                     │
├─────────────────────────────────────────┤
│                                         │
│ Layout template                         │
│ [Dropdown: 3x3 grid, 2x2 grid, ...]     │
│                                         │
│ Display duration (seconds)              │
│ [10_____]                               │
│                                         │
│ Enter the overall graphic or text       │
│ [Rich-text editor or image upload]      │
│                                         │
│ Preview: [shows the split into slots]   │
│                                         │
│ Slot mapping: [interactive assignment]  │
│   Slot wall-r1-c1 → screen info01       │
│   Slot wall-r1-c2 → screen info02       │
│   ... (9 slots in total)                │
│                                         │
│ [+ Change layout type]   [Save]         │
│                                         │
│ [< Back]   [Save & activate]            │
│ [Save]                                  │
│ [Cancel]                                │
└─────────────────────────────────────────┘
```

For `screen_specific_scene`:

```
┌─────────────────────────────────────────┐
│ Per-monitor scenes                      │
├─────────────────────────────────────────┤
│                                         │
│ Scene 1: info wall                      │
│                                         │
│ Target                                  │
│ ⦿ Group: [Dropdown: wall-all]           │
│ ○ Individual screens: [checkboxes]      │
│                                         │
│ Asset                                   │
│ [Upload or URL]                         │
│                                         │
│ Duration, timeout, valid from/until     │
│ [... as above ...]                      │
│                                         │
│ [+ Add another scene]                   │
│                                         │
│ [< Back]   [Save & activate]            │
└─────────────────────────────────────────┘
```
## 2. Campaign Management

Campaigns are the operational instances of templates.

### Campaign List

**Page:** Admin → Campaigns

**Display:**

| Name | Template | Active | Target | Valid from | Valid until | Affected screens |
|---|---|---|---|---|---|---|
| Weihnachten Dekoration | Weihnachtsmotiv 2025 | ✓ | all | 2025-12-01 | 2025-12-26 | 13 screens |
| Schriftzug Januar | Schriftzug Infowand | ✗ | wall-all | 2025-01-06 | 2025-01-31 | 9 screens |

**Actions:**

- "Edit" — change campaign properties
- "Activate/Deactivate" — immediate toggle
- "Preview" — shows the affected screens with rendering
- "Duplicate" — as a new campaign with a different template
- "Delete" — if inactive and expired

### Start a New Campaign

**Workflow option 1 — from a template:**

Template list → [Template] → "Activate"

```
┌─────────────────────────────────────────┐
│ Start campaign: Weihnachtsmotiv 2025    │
├─────────────────────────────────────────┤
│                                         │
│ Campaign name                           │
│ [ Weihnachten 2025 einfuehrung____ ]    │
│                                         │
│ Active immediately?                     │
│ ⦿ Yes                                   │
│ ○ Scheduled for: [pick date/time]       │
│   [ 2025-12-01 ] [ 09:00 ]              │
│                                         │
│ Valid from                              │
│ [ 2025-12-01 ] [ 00:00 ]                │
│                                         │
│ Valid until                             │
│ [ 2025-12-26 ] [ 23:59 ]                │
│                                         │
│ Priority (relative to playlists)        │
│ [1 (higher values win)] ___             │
│                                         │
│ Auto-deactivate on expiry?              │
│ ⦿ Yes                                   │
│ ○ No (campaign stays inactive)          │
│                                         │
│ [Start campaign]   [Cancel]             │
└─────────────────────────────────────────┘
```

**Workflow option 2 — new campaign without a template:**

Admin → Campaigns → "+ New campaign"

```
[Select template] → [Basic data] → [Activation]
```
### Campaign Detail Page

**Display of a running campaign:**

```
Campaign: Weihnachten 2025 einfuehrung
Status: ACTIVE since 2025-12-01 09:00

Template: Weihnachtsmotiv 2025 (full_screen_media)
Target: all (13 screens)

Valid: 2025-12-01 00:00 until 2025-12-26 23:59
Priority: 1

Affected screens:
┌──────────────────────────────┐
│ info01   online    active    │ [Screenshot]
│ info02   online    active    │ [Screenshot]
│ info03   offline   pending   │
│ info04   online    active    │ [Screenshot]
│ ... (10 more) ...            │
└──────────────────────────────┘

Actions:
[Deactivate]   [Edit]   [Change preview]

Activation history:
2025-12-01 09:00 — campaign started by admin@...
2025-12-01 09:05 — 9 screens have rendered
2025-12-01 10:30 — info03 went offline, campaign content waits for its return
```

## 3. Relation to the Priority Rule

The rule `campaign > tenant_playlist > fallback` is:

- **hardcoded** in the player
- **administrable** via campaign activation
- **predictable** thanks to clear documentation

### Mapping in the System

```
For each screen:
  IF a campaign is active for this screen AND valid_from <= now <= valid_until
    THEN show the campaign content
  ELSE IF the tenant playlist has valid items
    THEN show the tenant playlist
  ELSE
    show the fallback
```

This logic is:

1. **computed server-side** on every sync request (HTTP `/api/v1/screens/{screenSlug}/playlist`)
2. **re-checked player-side** at render time (for offline robustness)
### Admin Visibility

On the "Screens" page, the admin UI shows for each monitor:

```
info01
 ├── Campaign (ACTIVE until 2025-12-26)
 │    └── Weihnachten 2025 einfuehrung
 ├── Fallback (shown after the campaign expires)
 └── Tenant playlist
      ├── Playlist A (tenant XYZ)
      │    ├── Image-1 (valid until 2025-04-01)
      │    ├── Video-2 (loading...)
      │    └── Webpage-3
      └── Fallback directory
```

This view shows what the screen is **currently displaying** and why.

## 4. Data Model

### Table `templates`

```sql
CREATE TABLE templates (
    id                 UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    slug               TEXT NOT NULL UNIQUE,
    name               TEXT NOT NULL,
    description        TEXT,
    template_type      TEXT NOT NULL CHECK (template_type IN ('message_wall', 'full_screen_media', 'screen_specific_scene')),
    created_at         TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at         TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    created_by_user_id TEXT NOT NULL,

    -- serialized configuration (JSON)
    config JSONB NOT NULL DEFAULT '{}'
    -- example:
    -- {
    --   "target_mode": "all_screens" | "group" | "specific_screens",
    --   "target_group": "wall-all" (when target_mode = "group"),
    --   "target_screen_ids": ["..."] (when target_mode = "specific_screens"),
    --   "scenes": [
    --     {
    --       "media_type": "image|video|pdf|webpage|html",
    --       "asset_id": "...",
    --       "portrait_asset_id": "..." (optional),
    --       "landscape_asset_id": "..." (optional),
    --       "duration_sec": 10,
    --       "load_timeout_sec": 10,
    --       "valid_from": "2025-03-25T00:00:00Z",
    --       "valid_until": "2025-04-01T23:59:59Z"
    --     }
    --   ]
    -- }
);
```
### Table `campaigns`

```sql
CREATE TABLE campaigns (
    id                 UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name               TEXT NOT NULL,
    template_id        UUID NOT NULL REFERENCES templates(id),
    active             BOOLEAN NOT NULL DEFAULT false,
    priority           INT NOT NULL DEFAULT 1,
    valid_from         TIMESTAMPTZ NOT NULL,
    valid_until        TIMESTAMPTZ,
    auto_deactivate    BOOLEAN NOT NULL DEFAULT true,
    created_at         TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at         TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    created_by_user_id TEXT NOT NULL,

    -- overrides/extends the template's target (optional)
    target_mode       TEXT CHECK (target_mode IN ('template', 'all_screens', 'group', 'specific_screens')),
    target_group      TEXT,
    target_screen_ids UUID[] DEFAULT '{}'::uuid[]
);
```
### Table `campaign_screen_assignments` (generated)

This table is generated and maintained **server-side** when a campaign becomes active.

It expands groups into concrete screen IDs:

```sql
CREATE TABLE campaign_screen_assignments (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    campaign_id UUID NOT NULL REFERENCES campaigns(id) ON DELETE CASCADE,
    screen_id   UUID NOT NULL REFERENCES screens(id) ON DELETE CASCADE,
    assigned_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    UNIQUE(campaign_id, screen_id)
);
```

**Logic:**

```
IF campaign.target_mode = 'template'
  THEN fill campaign_screen_assignments from template.config.target_screen_ids
ELSE IF campaign.target_mode = 'group'
  THEN fill campaign_screen_assignments from all screens in campaign.target_group
ELSE IF campaign.target_mode = 'specific_screens'
  THEN fill campaign_screen_assignments from campaign.target_screen_ids
ELSE
  (all screens)
```
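The expansion logic can be sketched as a pure Go function. The slice parameters stand in for the DB lookups (template config, group membership, list of all screens) and are assumptions; only the branch order comes from the table above.

```go
package main

import "fmt"

// expandTargets turns a campaign's target_mode into concrete screen
// IDs, following the decision table above. Hypothetical helper: the
// real implementation would query the DB instead of taking slices.
func expandTargets(mode string, templateIDs, groupIDs, specificIDs, allIDs []string) []string {
	switch mode {
	case "template":
		return templateIDs
	case "group":
		return groupIDs
	case "specific_screens":
		return specificIDs
	default: // no override set -> all screens
		return allIDs
	}
}

func main() {
	all := []string{"info01", "info02", "info03"}
	wall := []string{"info01", "info02"}
	fmt.Println(expandTargets("group", nil, wall, nil, all))
	fmt.Println(expandTargets("", nil, wall, nil, all))
}
```

The resulting IDs are what gets inserted into `campaign_screen_assignments`, one row per (campaign, screen) pair.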

## 5. Practical Examples

### Example 1 — Christmas artwork (full_screen_media)

**Scenario:**

Starting 2025-12-01, the admin wants to show a red Christmas motif on all screens for 4 weeks.

**Steps:**

1. Admin → Templates → "+ New template"
   - Name: `Weihnachtsmotiv 2025`
   - Type: `full_screen_media`
   - Target: `All screens`

2. Add a scene:
   - Upload an image (suitable for portrait and landscape)
   - Duration: 10 seconds

3. Save → the editor shows the draft with a preview

4. Admin → Templates → [Weihnachtsmotiv 2025] → "Activate"
   - Campaign name: `Weihnachten 2025 globale Dekoration`
   - Valid from: 2025-12-01
   - Valid until: 2025-12-26
   - Active: immediately

5. Save the campaign → immediately visible on all screens
### Example 2 — Lettering Across the Info Wall (message_wall)

**Scenario:**

The admin has a new `message_wall` group "wall-all" with 9 screens. He wants to split a huge red lettering motif and distribute it across all 9 screens.

**Steps:**

1. Admin → Templates → "+ New template"
   - Name: `Rotes Schriftzug auf Infowand`
   - Type: `message_wall`
   - Target: `Group: wall-all`

2. Choose the layout: `3x3 grid` (matches 9 screens)

3. Upload the complete graphic (or enter it as text)

4. Slot mapping:
   - The system shows an interactive 3x3 preview
   - The admin assigns: "Slot 1 → info01", "Slot 2 → info02", ...
   - The system generates the crop regions automatically

5. Save + activate
   - Each screen shows its section
### Example 3 — Deactivation and Fallback

**Scenario:**

A campaign has been running for 2 weeks. The admin wants to stop it immediately so the screens fall back to their normal playlists.

**Action:**

Admin → Campaigns → [Campaign] → "Deactivate"

**Result:**

- The server sets `campaigns.active = false`
- On the next sync, each player loads the tenant playlist again
- The fallback directory is shown only when the tenant playlist is empty

## 6. Summary

The template editor:

- **is two-tiered** — template management plus campaign activation
- **is intuitive** — multi-step forms with previews
- **supports all template types** — full_screen, message_wall, screen_specific
- **keeps the priority rule transparent** — the admin sees which campaign overrides which screens
- **is future-proof** — the data model scales with new template types

@ -119,45 +119,45 @@ Logout implementieren, alle Routen eintragen.
|
|||
Ziel: Drei Middleware-Funktionen implementieren, Router umbauen sodass geschuetzte Routen
|
||||
hinter den Middlewares liegen, hardcoded `"morz"` an allen vier Stellen entfernen.
|
- [ ] **Implement RequireAuth** – in `server/backend/internal/httpapi/middleware.go`
  (new file): function `RequireAuth(authStore *store.AuthStore) func(http.Handler) http.Handler`;
  reads the `morz_session` cookie, calls `authStore.GetSessionUser`,
  stores the `*store.User` in the context (dedicated key type `contextKey`),
  and redirects to `/login?next=<current-path>` on failure.

- [ ] **Implement RequireAdmin** – in `middleware.go`, function
  `RequireAdmin(next http.Handler) http.Handler`; reads the user from the context,
  checks `user.Role == "admin"`, otherwise responds with 403.

- [ ] **Implement RequireTenantAccess** – in `middleware.go`, function
  `RequireTenantAccess(next http.Handler) http.Handler`; reads the user and `{tenantSlug}` from
  the request path, allows access if `user.Role == "admin"` or `user.TenantSlug == tenantSlug`
  (add a `TenantSlug string` field to `store.User`, populated via JOIN in `GetSessionUser`),
  otherwise responds with 403.

- [ ] **Restructure the router** – in `router.go`, reorganize the previously flat route list
  into groups: put the `/admin` routes behind `RequireAuth` + `RequireAdmin`,
  and the `/manage/{screenSlug}` routes plus the future `/tenant/{tenantSlug}/...` routes behind
  `RequireAuth` + `RequireTenantAccess`; use a `chain(...Middleware)` helper
  or wrap inline.

- [ ] **Remove hardcoded "morz" (occurrence 1)** – in
  `server/backend/internal/httpapi/manage/ui.go` line 93:
  replace `tenants.Get(r.Context(), "morz")` with reading the authenticated user from the
  context; use `tenant_id` from `user.TenantID`.

- [ ] **Remove hardcoded "morz" (occurrence 2)** – in `ui.go` line 154:
  same replacement for `HandleManageUI`.

- [ ] **Remove hardcoded "morz" (occurrence 3)** – in `ui.go` line 197:
  same replacement for `HandleProvisionUI`; read the SSH user `"morz"` (line 191) from the
  config or allow it as an optional query parameter.

- [ ] **Remove hardcoded "morz" (occurrence 4)** – in
  `server/backend/internal/httpapi/manage/register.go` line 43:
  replace `tenants.Get(r.Context(), "morz")` with `cfg.DefaultTenantSlug`.

- [ ] **Docs** – extend `docs/SERVER-KONZEPT.md` with a "Middleware chain" section:
  diagram of the route groups with their respective middlewares.

---
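
The middleware chain described in the checklist above can be sketched as follows. This is a minimal sketch, not the project's actual code: the `User` and `AuthStore` types stand in for the `store` package, `allowed` is a hypothetical helper, and the Go 1.22 `r.PathValue` wildcard is assumed for `{tenantSlug}`.

```go
package main

import (
	"context"
	"net/http"
	"net/url"
)

type contextKey string

const userKey contextKey = "user"

// User mirrors the fields the checklist assumes on store.User.
type User struct {
	Role       string
	TenantSlug string
}

// AuthStore is a stand-in for store.AuthStore.GetSessionUser.
type AuthStore interface {
	GetSessionUser(ctx context.Context, sessionID string) (*User, error)
}

// authFunc adapts a plain function to AuthStore (handy for tests).
type authFunc func(ctx context.Context, sessionID string) (*User, error)

func (f authFunc) GetSessionUser(ctx context.Context, s string) (*User, error) { return f(ctx, s) }

// allowed implements the access rule: admins everywhere,
// tenant users only under their own slug.
func allowed(user *User, tenantSlug string) bool {
	if user == nil {
		return false
	}
	return user.Role == "admin" || user.TenantSlug == tenantSlug
}

// RequireAuth reads the morz_session cookie, resolves the user and stores it
// in the request context; on failure it redirects to /login?next=<path>.
func RequireAuth(auth AuthStore) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			c, err := r.Cookie("morz_session")
			if err != nil {
				http.Redirect(w, r, "/login?next="+url.QueryEscape(r.URL.Path), http.StatusSeeOther)
				return
			}
			user, err := auth.GetSessionUser(r.Context(), c.Value)
			if err != nil || user == nil {
				http.Redirect(w, r, "/login?next="+url.QueryEscape(r.URL.Path), http.StatusSeeOther)
				return
			}
			next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), userKey, user)))
		})
	}
}

// RequireTenantAccess allows admins everywhere and tenants only on their own slug.
func RequireTenantAccess(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, _ := r.Context().Value(userKey).(*User)
		if !allowed(user, r.PathValue("tenantSlug")) {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```

A router group would then wrap its handlers as `RequireAuth(authStore)(RequireTenantAccess(handler))`.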

@@ -167,52 +167,52 @@ hinter den Middlewares liegen, hardcoded `"morz"` an allen vier Stellen entferne

Goal: dedicated package for the tenant handlers, two-level tab view
(screens with live status, media library with upload), navbar, routing.

- [ ] **Create the package directory** – new directory
  `server/backend/internal/httpapi/tenant/`; files:
  `tenant.go` (handlers), `templates.go` (template strings); same structure as package `manage`.

- [ ] **Define tenantDashTmpl** – in `tenant/templates.go`, a Bulma layout with:
  navbar (logo on the left, "Log out" button on the right as POST /logout),
  two tabs (`<div class="tabs">`) with IDs `tab-screens` and `tab-media`,
  tab A "My monitors", tab B "Media library"; inline JS snippet for tab switching at the end
  of the template (analogous to the existing inline scripts in `manage/templates.go`).

- [ ] **Tab A – implement screen cards** – in `tenantDashTmpl`, tab A with Bulma cards
  per screen: title (Screen.Name), orientation icon, status badge
  (online/offline/unknown) via JS fetch from `/api/v1/screens/status`;
  call a JS function `loadScreenStatuses()` every 30 seconds and set the badge color
  (is-success / is-danger / is-warning).

- [ ] **Tab B – implement the media library with upload** – in `tenantDashTmpl`, tab B:
  upload form (multipart, POST `/tenant/{tenantSlug}/upload`), file list as a Bulma table
  (title, type, size, date, delete button with modal confirmation analogous to `manage/templates.go`);
  upload progress bar (reuse or extract the existing JS logic from `manageTmpl`).

- [ ] **Implement HandleTenantDashboard** – in `tenant/tenant.go`, function
  `HandleTenantDashboard(tenantStore *store.TenantStore, screenStore *store.ScreenStore,
  mediaStore *store.MediaStore, statusStore playerStatusStore) http.HandlerFunc`;
  reads `{tenantSlug}` from the URL, loads screens and media assets, renders `tenantDashTmpl`.

- [ ] **Implement HandleTenantUpload** – in `tenant/tenant.go`, function
  `HandleTenantUpload(tenantStore *store.TenantStore, mediaStore *store.MediaStore,
  uploadDir string) http.HandlerFunc`; identical upload logic to `manage.HandleUploadMediaUI`,
  but without a screen context (media belongs directly to the tenant);
  on success, redirect to `/tenant/{tenantSlug}/dashboard?tab=media&flash=uploaded`.

- [ ] **Add a navbar to the admin UI** – in `manage/templates.go`, add a minimal Bulma navbar
  with "Admin" (active) and a "Log out" button to `adminTmpl` and `manageTmpl`,
  so that both UIs are visually consistent.

- [ ] **Check responsiveness** – test `tenantDashTmpl` at the `is-mobile` breakpoint:
  screen cards should wrap in `columns is-multiline`; the upload area should remain
  usable on narrow screens.

- [ ] **Register the routes** – in `router.go` inside `registerManageRoutes`, behind
  `RequireAuth` + `RequireTenantAccess`:
  `mux.HandleFunc("GET /tenant/{tenantSlug}/dashboard", tenant.HandleTenantDashboard(...))`,
  `mux.HandleFunc("POST /tenant/{tenantSlug}/upload", tenant.HandleTenantUpload(...))`.

- [ ] **Docs** – add a new "Tenant dashboard" section to `docs/SERVER-KONZEPT.md` with
  the URL scheme, tab descriptions, and the status polling interval.

---

@@ -223,28 +223,28 @@ Goal: the "Back" link in the manage UI should be context-sensitive –

coming from the admin area, it points to the admin overview;
coming from the tenant dashboard, it points back to the dashboard.

- [ ] **Extend TemplateData with BackLink/BackLabel** – in `manage/ui.go`,
  add the fields `BackLink string` and `BackLabel string` to the struct `manageData`
  (or the equivalent anonymous struct).

- [ ] **HandleManageUI: read BackLink from a query parameter** – in `HandleManageUI`:
  if `r.URL.Query().Get("from") == "tenant"`, then
  `BackLink = "/tenant/{tenantSlug}/dashboard"` and `BackLabel = "← Dashboard"`;
  otherwise `BackLink = "/admin"` and `BackLabel = "← Admin"`.

- [ ] **manageTmpl: replace the static "← Admin"** – in `manage/templates.go`,
  replace the hardcoded link `← Admin` with `{{.BackLabel}}` and `href="{{.BackLink}}"`.

- [ ] **Tenant dashboard: link to the manage UI with ?from=tenant** – in `tenant/templates.go`,
  phrase every "Edit playlist" link as `/manage/{screenSlug}?from=tenant`
  so that the back link is set correctly.

- [ ] **Breadcrumb navigation** – optional but recommended: in `manageTmpl`, insert a Bulma
  breadcrumb bar above the main content:
  admin path: `Admin > {ScreenName}`, tenant path: `Dashboard > {ScreenName}`;
  assemble the data from `BackLabel`/`BackLink` and `Screen.Name`.

- [ ] **Docs** – a comment in `manage/ui.go` at `HandleManageUI` documents
  the `?from=tenant` parameter and the BackLink behavior.

---
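
The context-sensitive back link reduces to a small pure function; a sketch assuming the `?from=tenant` convention above (`backLink` is a hypothetical helper, not code from `manage/ui.go`):

```go
package main

import "net/url"

// backLink resolves the context-sensitive back link for the manage UI.
// tenantSlug is the slug of the screen's tenant; the ?from=tenant query
// parameter decides which overview the link points to.
func backLink(rawQuery, tenantSlug string) (link, label string) {
	q, _ := url.ParseQuery(rawQuery)
	if q.Get("from") == "tenant" {
		return "/tenant/" + tenantSlug + "/dashboard", "← Dashboard"
	}
	return "/admin", "← Admin"
}
```

Keeping this a pure function makes the two navigation paths trivial to unit-test.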

@@ -254,7 +254,7 @@ coming from the tenant dashboard, it points back to the dashboard.

Goal: session cleanup as a background process, secrets in Docker/Ansible,
code review by Larry, end-to-end test, deployment, updating the core documentation.

- [ ] **Implement a session-cleanup ticker** – in `app.go`, after server start, launch a
  `time.NewTicker(1 * time.Hour)` (as a goroutine) that calls `authStore.CleanExpiredSessions`;
  stop the ticker on shutdown (context cancellation or `defer ticker.Stop()`).

@@ -281,12 +281,12 @@ Code-Review durch Larry, End-to-End-Test, Deployment, Nachziehen der Kerndokumen

  run `docker compose pull && docker compose up -d` on the server;
  migration 002_auth.sql is applied automatically; check the logs for errors.

- [ ] **Update TODO.md** – tick off the completed items in `TODO.md`:
  "Firmen-/Monitor-Oberflaeche in Hauptbereiche aufteilen" (phase 4),
  "Authentifizierungskonzept festlegen" (if still open),
  "Mandantentrennung in den APIs absichern" (if still open).

- [ ] **Update README / DEVELOPMENT** – extend `DEVELOPMENT.md` with a section
  "Local development with login": set the env variables `MORZ_INFOBOARD_ADMIN_PASSWORD=dev`
  and `MORZ_INFOBOARD_DEV_MODE=true` to be able to work without the HTTPS-only cookie.

@@ -1,305 +0,0 @@
# Info-Board Neu - Watchdog Concept

## Goal

The watchdog monitors the player's critical components and ensures that display operation is restored automatically after crashes or hangs.

Monitoring happens on two levels:

1. **Browser watchdog** — monitors Chromium
2. **Agent watchdog** — monitors the player agent

## Principles

- Watchdogs are external and independent of the processes they monitor
- Detection is active, via health checks, not via liveness pings
- Restart strategies are progressive and avoid restart loops
- Logging is structured and meaningful for admin diagnostics

## Browser Watchdog (Chromium monitoring)

### Responsibilities

The browser watchdog ensures that:

- Chromium is always running and responsive
- the renderer is not stuck in an endless loop
- rendering errors do not lead to permanent black screens
- Chromium is restarted quickly after a crash or hang

### Health-check procedure

The watchdog runs the following checks regularly:

#### 1. Process check

```
Does the Chromium process still exist?
- lsof or ps query on the PID
- Timeout: immediately on a missing PID
```

#### 2. HTTP health check on localhost

```
GET http://localhost:8081/health
Timeout: 5 seconds
Expected: 200 OK with JSON body {status: "ok"}
```

The `player-ui` must provide a simple `/health` endpoint that responds quickly, even while the playlist is being processed.
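
Such a `/health` endpoint is only a few lines; a minimal sketch using Go's `net/http` (the function names are assumptions, the payload shape follows the spec above):

```go
package main

import "net/http"

// healthBody is the static payload for GET /health. Keeping it static means
// the handler never blocks on playlist processing.
func healthBody() []byte { return []byte(`{"status":"ok"}`) }

// healthHandler answers the watchdog's GET /health probe.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	_, _ = w.Write(healthBody())
}
```

Registered as `mux.HandleFunc("/health", healthHandler)`, this responds well within the watchdog's 5-second timeout.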

#### 3. Rendering verification (optional, phase 2)

```
Detect via screenshots whether the browser:
- shows an error page
- is completely black (more than 95% black pixels)
- has shown the same content for several minutes although a change was expected
```

This method is optional for v1 but is planned for later hang detection.

### Monitoring interval

- Health check every **30 seconds**
- On failure: check for a restart immediately (no waiting for the next cycle)

### Restart strategy

#### Strategy: exponential backoff with a maximum

```
On failure:
  Failure 1: restart immediately (wait 0s)
  Failure 2: wait 2s, attempt restart
  Failure 3: wait 5s, attempt restart
  Failure 4: wait 10s, attempt restart
  Failure 5+: wait 30s, attempt restart
After 10 consecutive failures without a successful recovery:
  - alert the admin (via server status)
  - set the overlay to "Error"
  - slow the watchdog loop down to a 5-minute interval
```

#### Success criterion

When the health check succeeds 3 times in a row:

- reset the backoff counter to 0
- the next failure starts again with an immediate restart

### Logging

Every watchdog event is logged:

```json
{
  "ts": "2025-03-23T14:22:15Z",
  "component": "browser_watchdog",
  "event": "restart",
  "reason": "health_check_timeout",
  "attempt": 2,
  "next_retry_in_ms": 5000,
  "details": {
    "pid_before": 1234,
    "pid_after": 1245,
    "http_status_before": 0
  }
}
```

Logging targets:

- structured on stdout/stderr (JSON)
- locally in `/var/log/signage/watchdog.log` with rotation

## Agent Watchdog (systemd integration)

### Responsibilities

The agent watchdog (i.e. the systemd unit) ensures that:

- the player agent is always running
- it is restarted quickly after a crash or an intentional stop
- restart limits prevent a hang loop

### systemd configuration

```ini
[Service]
Type=simple
ExecStart=/usr/local/bin/player-agent
Restart=always
RestartSec=5
StartLimitInterval=300
StartLimitBurst=10
StandardOutput=journal
StandardError=journal
```

**Meaning:**

- `Restart=always` — restart on every exit (regardless of the exit code)
- `RestartSec=5` — wait 5 seconds before restarting
- `StartLimitInterval=300` — count restarts within a 300s window
- `StartLimitBurst=10` — more than 10 restarts in 300s makes systemd stop the unit

When `StartLimitBurst` is reached:

- systemd leaves the service stopped
- the admin is informed (the status API sets `agent_watchdog_failed`)
- manual intervention or an admin command is required
### Health check by the agent itself

Internally, the agent should:

- check the broker connection regularly
- track the server sync status
- not simply keep running on critical internal errors

If the agent considers itself irrecoverably broken, it should:

- terminate cleanly with exit code `1` (systemd restarts it)
- not linger after `exit(0)`

## Relationship to systemd

### Architecture decision

`systemd` handles process resurrection for the agent.

The browser watchdog is a **separate process, independent of systemd**, because:

- Chromium needs continuous monitoring (health checks every 30s)
- a systemd watchdog timer would be too blunt (only on/off, not granular)
- the browser watchdog can also monitor the systemd unit itself (defensive architecture)

### Optional: systemd WatchdogSec

For the agent it also makes sense to use systemd's watchdog timer:

```ini
[Service]
WatchdogSec=30
ExecStart=/usr/local/bin/player-agent
```

The agent would then have to send the `WATCHDOG=1` keep-alive periodically via `sd_notify` (e.g. `systemd-notify WATCHDOG=1`).

This is **optional for v1** but planned for later robustness.

## Integration with the player setup

### Directory layout

```
/usr/local/bin/
  player-agent        — Go binary
  browser-watchdog    — Go binary or shell script

/etc/systemd/system/
  signage-agent.service
  signage-browser-watchdog.service

/var/lib/signage/
  watchdog-state.json — last state, backoff counter

/var/log/signage/
  watchdog.log        — structured logging
```

### Startup order

1. The base system boots, X11 starts
2. `signage-agent.service` starts (systemd)
3. The agent starts, checks the configuration, starts the `player-ui` HTTP server
4. `signage-browser-watchdog.service` starts (systemd)
5. The watchdog waits an initial 10s before the first checks start
6. The agent launches Chromium
7. The watchdog begins its health checks

This ordering prevents the watchdog from trying to monitor the browser before the agent is ready.

### Stop order on shutdown

1. systemd sends SIGTERM to the agent and the browser watchdog
2. Watchdog: exits and attempts no restarts
3. Agent: exits and shuts Chromium down
4. systemd waits for completion

## Error classification and admin reporting

### Error classes

| Error class | Symptom | Watchdog action | Admin alert |
|---|---|---|---|
| Process crash | PID gone | Immediate restart | After 3 failures |
| Health-check timeout | HTTP timeout | Backoff restart | After 5 failures |
| Rendering error | Browser shows an error page | Restart | Immediately visible |
| Backoff maximum | 10+ errors in 5 min | Stop, alert | Immediately |
| Agent unhealthy | Server sync failed | systemd restart | After 3 sync failures |

### Admin interface

The status page and the admin dashboard show:

```json
{
  "screen_id": "info01",
  "browser_status": {
    "pid": 1234,
    "health": "ok",
    "last_check_at": "2025-03-23T14:25:00Z",
    "restart_count_5m": 0,
    "last_error": null
  },
  "agent_status": {
    "pid": 567,
    "uptime_seconds": 3600,
    "sync_status": "ok",
    "last_sync_at": "2025-03-23T14:24:55Z",
    "systemd_restart_count": 0
  },
  "watchdog_alert": null
}
```
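
For reference, the payload above could be modeled on the Go side roughly as follows; the JSON field names are taken from the example, while the Go type names and the `parseStatus` helper are assumptions:

```go
package main

import (
	"encoding/json"
	"time"
)

// BrowserStatus and AgentStatus mirror the JSON example above.
type BrowserStatus struct {
	PID            int       `json:"pid"`
	Health         string    `json:"health"`
	LastCheckAt    time.Time `json:"last_check_at"`
	RestartCount5m int       `json:"restart_count_5m"`
	LastError      *string   `json:"last_error"` // pointer so null survives round-trips
}

type AgentStatus struct {
	PID                 int       `json:"pid"`
	UptimeSeconds       int       `json:"uptime_seconds"`
	SyncStatus          string    `json:"sync_status"`
	LastSyncAt          time.Time `json:"last_sync_at"`
	SystemdRestartCount int       `json:"systemd_restart_count"`
}

type ScreenWatchdogStatus struct {
	ScreenID      string        `json:"screen_id"`
	BrowserStatus BrowserStatus `json:"browser_status"`
	AgentStatus   AgentStatus   `json:"agent_status"`
	WatchdogAlert *string       `json:"watchdog_alert"`
}

// parseStatus decodes a payload like the example above.
func parseStatus(raw []byte) (ScreenWatchdogStatus, error) {
	var s ScreenWatchdogStatus
	err := json.Unmarshal(raw, &s)
	return s, err
}
```

Using pointers for the nullable fields keeps `null` distinguishable from an empty string.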

## Configurable parameters

In `/etc/signage/config.yml` or via environment variables:

```yaml
watchdog:
  browser:
    check_interval_sec: 30
    health_check_timeout_sec: 5
    restart_backoff_steps: [0, 2, 5, 10, 30]  # seconds
    max_consecutive_errors: 10
    error_window_sec: 300
  agent:
    systemd_unit: "signage-agent.service"
    healthcheck_timeout_sec: 10
```

## Testing and validation

Test cases for the watchdog:

1. Kill Chromium manually (`kill -9 PID`) — it should be restarted within 30s
2. Start/stop the player agent — systemd should trigger a restart
3. Shut down the player-UI HTTP server — the browser watchdog should restart it
4. Rapid consecutive crashes — verify the exponential backoff
5. Admin command `restart_player` — orderly restart, restart counter not incremented afterwards
6. Check the watchdog logs for structure and completeness

## Summary

The watchdog approach is:

- **Transparent** — clear logging and admin visibility
- **Progressive** — backoff instead of a restart loop
- **Defensive** — multiple detection methods (process, HTTP, optionally rendering)
- **Integrated** — works with systemd, not against it
- **Scalable** — the approach applies to all players regardless of location or network

@@ -16,7 +16,6 @@ import (
	"git.az-it.net/az/morz-infoboard/player/agent/internal/mqttheartbeat"
	"git.az-it.net/az/morz-infoboard/player/agent/internal/mqttsubscriber"
	"git.az-it.net/az/morz-infoboard/player/agent/internal/playerserver"
	"git.az-it.net/az/morz-infoboard/player/agent/internal/screenshot"
	"git.az-it.net/az/morz-infoboard/player/agent/internal/statusreporter"
)

@@ -223,14 +222,6 @@ func (a *App) Run(ctx context.Context) error {
	// Start polling the backend for playlist updates (60 s fallback + MQTT trigger).
	go a.pollPlaylist(ctx)

	// Phase 6: Periodische Screenshot-Erzeugung, wenn konfiguriert.
	if a.Config.ScreenshotEvery > 0 {
		ss := screenshot.New(a.Config.ScreenID, a.Config.ServerBaseURL, a.Config.ScreenshotEvery, a.logger)
		go ss.Run(ctx)
		a.logger.Printf("event=screenshot_enabled screen_id=%s interval_seconds=%d",
			a.Config.ScreenID, a.Config.ScreenshotEvery)
	}

	a.emitHeartbeat()
	a.mu.Lock()
	a.status = StatusRunning

@@ -281,10 +272,6 @@ func (a *App) registerScreen(ctx context.Context) {
		return
	}
	req.Header.Set("Content-Type", "application/json")
	// K6: Register-Secret mitsenden, wenn konfiguriert.
	if a.Config.RegisterSecret != "" {
		req.Header.Set("X-Register-Secret", a.Config.RegisterSecret)
	}

	resp, err := http.DefaultClient.Do(req)
	if err == nil {

@@ -23,13 +23,6 @@ type Config struct {
	PlayerListenAddr string `json:"player_listen_addr"`
	// PlayerContentURL is a fallback URL shown when no playlist is available from the server.
	PlayerContentURL string `json:"player_content_url"`
	// RegisterSecret ist das Pre-Shared-Secret für POST /api/v1/screens/register (K6).
	// Muss mit MORZ_INFOBOARD_REGISTER_SECRET auf dem Server übereinstimmen.
	// Wenn leer, wird kein Header gesendet (kompatibel mit Servern ohne Secret).
	RegisterSecret string `json:"register_secret"`
	// ScreenshotEvery gibt das Intervall in Sekunden für periodische Screenshots an (Phase 6).
	// 0 oder negativ = Screenshots deaktiviert.
	ScreenshotEvery int `json:"screenshot_every_seconds"`
}

const defaultConfigPath = "/etc/signage/config.json"

@@ -97,12 +90,6 @@ func overrideFromEnv(cfg *Config) {
	cfg.ScreenName = getenv("MORZ_INFOBOARD_SCREEN_NAME", cfg.ScreenName)
	cfg.ScreenOrientation = getenv("MORZ_INFOBOARD_SCREEN_ORIENTATION", cfg.ScreenOrientation)
	cfg.PlayerContentURL = getenv("MORZ_INFOBOARD_PLAYER_CONTENT_URL", cfg.PlayerContentURL)
	cfg.RegisterSecret = getenv("MORZ_INFOBOARD_REGISTER_SECRET", cfg.RegisterSecret)
	if value := getenv("MORZ_INFOBOARD_SCREENSHOT_EVERY", ""); value != "" {
		var parsed int
		_, _ = fmt.Sscanf(value, "%d", &parsed)
		cfg.ScreenshotEvery = parsed
	}
	if value := getenv("MORZ_INFOBOARD_STATUS_REPORT_EVERY", ""); value != "" {
		var parsed int
		_, _ = fmt.Sscanf(value, "%d", &parsed)

File diff suppressed because one or more lines are too long

@@ -208,13 +208,6 @@ const playerHTML = `<!DOCTYPE html>
      opacity: 0;
      transition: opacity 0.5s ease;
    }

    /* PDF.js Canvas */
    #pdf-canvas {
      position: fixed; inset: 0;
      width: 100%; height: 100%;
      display: none; background: #000; z-index: 10;
    }
    #img-view {
      object-fit: contain;
      background: #000;

@@ -259,34 +252,22 @@ const playerHTML = `<!DOCTYPE html>
  <iframe id="frame" allow="autoplay; fullscreen" allowfullscreen></iframe>
  <img id="img-view" alt="">
  <video id="video-view" autoplay muted playsinline></video>
  <canvas id="pdf-canvas"></canvas>
  <div id="frame-error">
    <span class="error-title" id="frame-error-title"></span>
    <span class="error-hint">Seite kann nicht eingebettet werden</span>
  </div>
  <div id="dot"></div>

  <script src="/assets/pdf.min.js"></script>
  <script>
  var splash = document.getElementById('splash');
  var overlay = document.getElementById('info-overlay');
  var frame = document.getElementById('frame');
  var imgView = document.getElementById('img-view');
  var videoView = document.getElementById('video-view');
  var pdfCanvas = document.getElementById('pdf-canvas');
  var frameError = document.getElementById('frame-error');
  var frameErrorTitle = document.getElementById('frame-error-title');
  var dot = document.getElementById('dot');

  // PDF.js Worker konfigurieren
  if (typeof pdfjsLib !== 'undefined') {
    pdfjsLib.GlobalWorkerOptions.workerSrc = '/assets/pdf.worker.min.js';
  }

  // Aktuell laufende PDF-Render-Session; wird genutzt um veraltete Sessions
  // abzubrechen wenn hideAllContent() aufgerufen wird.
  var pdfSession = null;

  // ── Splash-Orientierung ───────────────────────────────────────────
  function updateSplash() {
    var portrait = window.innerHeight > window.innerWidth;

@@ -368,10 +349,6 @@ const playerHTML = `<!DOCTYPE html>
    videoView.pause();
    videoView.src = '';

    // Laufende PDF-Session abbrechen.
    pdfSession = null;
    pdfCanvas.style.display = 'none';

    [frame, imgView, videoView].forEach(function(el) {
      if (el.style.display !== 'none') {
        el.style.opacity = '0';

@@ -456,12 +433,29 @@ const playerHTML = `<!DOCTYPE html>
      rotateTimer = setTimeout(advanceOnce, ms);
      videoView.onended = advanceOnce;

    } else if (type === 'pdf') {
      showPdf(item);

    } else {
      // type === 'web' oder unbekannt → iframe
      if (frame.src !== item.src) { frame.src = item.src; }
      // type === 'web', 'pdf' oder unbekannt → iframe
      if (type === 'pdf') {
        frame.src = (function pdfUrl(src) {
          var defaults = {toolbar: '0', navpanes: '0', scrollbar: '0', view: 'Fit', page: '1'};
          var hashIdx = src.indexOf('#');
          var base = hashIdx >= 0 ? src.substring(0, hashIdx) : src;
          var existing = hashIdx >= 0 ? src.substring(hashIdx + 1) : '';
          var params = {};
          existing.split('&').forEach(function(p) {
            var kv = p.split('=');
            if (kv[0]) params[kv[0]] = kv[1] || '';
          });
          for (var k in defaults) {
            if (!(k in params)) params[k] = defaults[k];
          }
          var parts = [];
          for (var k in params) parts.push(k + '=' + params[k]);
          return base + '#' + parts.join('&');
        })(item.src);
      } else {
        if (frame.src !== item.src) { frame.src = item.src; }
      }
      frame.style.display = 'block';
      requestAnimationFrame(function() {
        requestAnimationFrame(function() { frame.style.opacity = '1'; });

@@ -492,83 +486,6 @@ const playerHTML = `<!DOCTYPE html>
    }
  }

  // ── PDF.js Seitendurchblättern ────────────────────────────────────
  function showPdf(item) {
    if (typeof pdfjsLib === 'undefined') {
      // PDF.js nicht verfügbar → Fehler anzeigen
      showFrameError(item);
      return;
    }

    // Graceful-Fallback-Timeout: falls PDF nicht innerhalb von 8s lädt → Fehler
    var loadTimeout = setTimeout(function() {
      if (pdfSession === session) {
        showFrameError(item);
      }
    }, 8000);

    // Neue Session starten; alte wird durch pdfSession-Check invalidiert
    var session = {};
    pdfSession = session;

    pdfCanvas.style.display = 'block';

    pdfjsLib.getDocument(item.src).promise.then(function(pdf) {
      clearTimeout(loadTimeout);

      // Session bereits abgebrochen?
      if (pdfSession !== session) { return; }

      var numPages = pdf.numPages;
      var secsPerPage = Math.max(2, Math.floor((item.duration_seconds || 20) / numPages));
      var pageNum = 1;

      function renderPage(n) {
        if (pdfSession !== session) { return; } // Session abgebrochen

        pdf.getPage(n).then(function(page) {
          if (pdfSession !== session) { return; }

          var baseViewport = page.getViewport({ scale: 1.0 });
          var scale = window.innerWidth / baseViewport.width;
          // Auch Höhe berücksichtigen damit die Seite vollständig sichtbar bleibt
          var scaleH = window.innerHeight / baseViewport.height;
          if (scaleH < scale) { scale = scaleH; }
          var viewport = page.getViewport({ scale: scale });

          pdfCanvas.width = viewport.width;
          pdfCanvas.height = viewport.height;

          var ctx = pdfCanvas.getContext('2d');
          page.render({ canvasContext: ctx, viewport: viewport }).promise.then(function() {
            if (pdfSession !== session) { return; }

            // Nach secsPerPage Sekunden zur nächsten Seite
            rotateTimer = setTimeout(function() {
              if (pdfSession !== session) { return; }
              if (n < numPages) {
                renderPage(n + 1);
              } else {
                // Alle Seiten gezeigt → normale Rotation fortsetzen
                currentIdx = (currentIdx + 1) % items.length;
                showItem(items[currentIdx]);
              }
            }, secsPerPage * 1000);
          }).catch(function() {
            if (pdfSession === session) { showFrameError(item); }
          });
        }).catch(function() {
          if (pdfSession === session) { showFrameError(item); }
        });
      }

      renderPage(pageNum);
    }).catch(function() {
      clearTimeout(loadTimeout);
      if (pdfSession === session) { showFrameError(item); }
    });
  }

  function showFrameError(item) {
    hideAllContent();
    overlay.style.display = 'none';

@@ -1,210 +0,0 @@
// Package screenshot erzeugt periodisch Screenshots des aktuell angezeigten Inhalts
// und sendet sie an den Backend-Server (Phase 6).
//
// Strategie (in dieser Reihenfolge):
//  1. scrot -z -q 60 /tmp/morz-screenshot.jpg — leichtgewichtig, für X11
//  2. import -window root /tmp/morz-screenshot.png — ImageMagick, falls scrot fehlt
//  3. xwd -root -silent | convert xwd:- /tmp/morz-screenshot.jpg — Fallback
//
// Der Screenshot wird per HTTP MULTIPART POST an
// POST /api/v1/player/screenshot gesendet.
package screenshot

import (
	"bytes"
	"context"
	"fmt"
	"log"
	"mime/multipart"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

const (
	screenshotPath    = "/tmp/morz-screenshot.jpg"
	defaultInterval   = 60 * time.Second
	uploadTimeout     = 15 * time.Second
	screenshotQuality = "60" // JPEG quality (0-100)
)

// Screenshotter erzeugt periodisch Screenshots und sendet sie an den Server.
type Screenshotter struct {
	screenID      string
	serverBaseURL string
	interval      time.Duration
	logger        *log.Logger
}

// New erzeugt einen neuen Screenshotter.
func New(screenID, serverBaseURL string, intervalSeconds int, logger *log.Logger) *Screenshotter {
	interval := defaultInterval
	if intervalSeconds > 0 {
		interval = time.Duration(intervalSeconds) * time.Second
	}
	if logger == nil {
		logger = log.New(os.Stdout, "screenshot ", log.LstdFlags|log.LUTC)
	}
	return &Screenshotter{
		screenID:      screenID,
		serverBaseURL: serverBaseURL,
		interval:      interval,
		logger:        logger,
	}
}

// Run startet die periodische Screenshot-Schleife und blockiert bis ctx abgebrochen wird.
func (s *Screenshotter) Run(ctx context.Context) {
	ticker := time.NewTicker(s.interval)
	defer ticker.Stop()

	// Erster Screenshot nach kurzem Delay (damit Chromium hochgefahren ist).
	select {
	case <-ctx.Done():
		return
	case <-time.After(10 * time.Second):
	}
	s.takeAndSend(ctx)

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			s.takeAndSend(ctx)
		}
	}
}

// takeAndSend erzeugt einen Screenshot und sendet ihn an den Server.
func (s *Screenshotter) takeAndSend(ctx context.Context) {
	path, err := s.capture()
	if err != nil {
		s.logger.Printf("event=screenshot_capture_failed screen_id=%s err=%v", s.screenID, err)
		return
	}
	defer os.Remove(path) //nolint:errcheck

	if err := s.upload(ctx, path); err != nil {
		s.logger.Printf("event=screenshot_upload_failed screen_id=%s err=%v", s.screenID, err)
		return
	}
	s.logger.Printf("event=screenshot_sent screen_id=%s", s.screenID)
}

// capture erzeugt einen Screenshot mit dem ersten verfügbaren Tool.
func (s *Screenshotter) capture() (string, error) {
	// Aufräumen falls eine alte Datei existiert.
	os.Remove(screenshotPath) //nolint:errcheck

	// Versuch 1: scrot (leichtgewichtig, für X11)
	if path, err := tryScrot(); err == nil {
		return path, nil
	}

	// Versuch 2: import (ImageMagick)
	if path, err := tryImport(); err == nil {
		return path, nil
	}

	// Versuch 3: xwd + convert
	if path, err := tryXwd(); err == nil {
		return path, nil
	}

	return "", fmt.Errorf("kein Screenshot-Tool verfügbar (scrot, import, xwd)")
}

func tryScrot() (string, error) {
	cmd := exec.Command("scrot", "-z", "-q", screenshotQuality, screenshotPath)
|
||||
if err := cmd.Run(); err != nil {
|
||||
return "", err
|
||||
}
|
||||
return screenshotPath, nil
|
||||
}
|
||||
|
||||
func tryImport() (string, error) {
|
||||
// ImageMagick import: -window root macht einen Screenshot des gesamten X-Displays.
|
||||
pngPath := "/tmp/morz-screenshot-tmp.png"
|
||||
cmd := exec.Command("import", "-window", "root", pngPath)
|
||||
if err := cmd.Run(); err != nil {
|
||||
return "", err
|
||||
}
|
||||
// Zu JPEG konvertieren.
|
||||
cmd = exec.Command("convert", pngPath, "-quality", screenshotQuality, screenshotPath)
|
||||
defer os.Remove(pngPath) //nolint:errcheck
|
||||
if err := cmd.Run(); err != nil {
|
||||
return "", err
|
||||
}
|
||||
return screenshotPath, nil
|
||||
}
|
||||
|
||||
func tryXwd() (string, error) {
|
||||
xwdPath := "/tmp/morz-screenshot-tmp.xwd"
|
||||
// xwd schreibt in Datei.
|
||||
xwdCmd := exec.Command("xwd", "-root", "-silent", "-out", xwdPath)
|
||||
if err := xwdCmd.Run(); err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer os.Remove(xwdPath) //nolint:errcheck
|
||||
// convert xwd -> jpg.
|
||||
cmd := exec.Command("convert", "xwd:"+xwdPath, "-quality", screenshotQuality, screenshotPath)
|
||||
if err := cmd.Run(); err != nil {
|
||||
return "", err
|
||||
}
|
||||
return screenshotPath, nil
|
||||
}
|
||||
|
||||
// upload sendet den Screenshot per MULTIPART POST an den Server.
|
||||
func (s *Screenshotter) upload(ctx context.Context, path string) error {
|
||||
data, err := os.ReadFile(path)
|
||||
if err != nil {
|
||||
return fmt.Errorf("read screenshot: %w", err)
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
writer := multipart.NewWriter(&body)
|
||||
_ = writer.WriteField("screen_id", s.screenID)
|
||||
|
||||
ext := filepath.Ext(path)
|
||||
mimeType := "image/jpeg"
|
||||
if ext == ".png" {
|
||||
mimeType = "image/png"
|
||||
}
|
||||
|
||||
fw, err := writer.CreateFormFile("screenshot", "screenshot"+ext)
|
||||
if err != nil {
|
||||
return fmt.Errorf("create form file: %w", err)
|
||||
}
|
||||
if _, err := fw.Write(data); err != nil {
|
||||
return fmt.Errorf("write form file: %w", err)
|
||||
}
|
||||
_ = writer.WriteField("mime_type", mimeType)
|
||||
writer.Close()
|
||||
|
||||
uploadCtx, cancel := context.WithTimeout(ctx, uploadTimeout)
|
||||
defer cancel()
|
||||
|
||||
req, err := http.NewRequestWithContext(uploadCtx,
|
||||
http.MethodPost,
|
||||
s.serverBaseURL+"/api/v1/player/screenshot",
|
||||
&body,
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
req.Header.Set("Content-Type", writer.FormDataContentType())
|
||||
|
||||
resp, err := http.DefaultClient.Do(req)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode >= 400 {
|
||||
return fmt.Errorf("server returned %d", resp.StatusCode)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
|
@@ -1,169 +1,26 @@
 # Backend

-This directory contains the central Go backend for the info-board system.
+This directory contains the first scaffold for the central backend.

-## Responsibilities
+Goals for the first iteration:

-- HTTP API and server-side HTML UI (Bulma)
-- PostgreSQL integration with automatic migrations
-- session-based authentication and role-based access control
-- media management and playlist management
-- player status ingest and diagnostics
-- MQTT notifications on playlist changes
+- HTTP API in Go
+- health endpoint
+- clean project structure for later API, worker, and database modules
+- first server-side resolution logic for `message_wall`

-## Directory layout
+Planned layout:

-- `cmd/api/` — backend entry point
-- `internal/app/` — app initialization and lifecycle
-- `internal/config/` — configuration via environment variables
-- `internal/db/` — PostgreSQL connection and migration runner
-- `internal/store/` — database access (TenantStore, ScreenStore, MediaStore, PlaylistStore, AuthStore)
-- `internal/fileutil/` — upload helpers (SaveUploadedFile with tenant isolation)
-- `internal/httpapi/` — HTTP routing, middleware, and handlers
-- `internal/httpapi/csrf.go` — double-submit-cookie CSRF protection
-- `internal/httpapi/ratelimit.go` — rate limiting for /login (brute-force protection)
-- `internal/httpapi/uploads.go` — consolidated upload handlers
-- `internal/httpapi/manage/` — admin UI and playlist management UI
-- `internal/httpapi/manage/csrf_helpers.go` — CSRF token helpers for templates
-- `internal/httpapi/tenant/` — tenant self-service dashboard
-- `internal/mqttnotifier/` — MQTT notifications
-- `internal/reqcontext/` — context keys for the authenticated user
+- `cmd/api/` for the API entry point
+- `internal/app/` for app initialization
+- `internal/campaigns/` for campaign and template logic
+- `internal/httpapi/` for HTTP routing and handlers
+- `internal/config/` for configuration

-## Database stores
+Currently available:

-### AuthStore (`internal/store/auth.go`)
-
-**Screen-user management:**
-- `CreateScreenUser(ctx, tenantID, username, passwordHash)` — create a new screen user
-- `ListScreenUsers(ctx, tenantID)` — list all screen users of a tenant
-- `DeleteUser(ctx, userID)` — delete a user and all associated permissions
-
-**Authentication:**
-- `GetUserByUsername(ctx, username)` — load a user by username
-- `CreateSession(ctx, userID, ttl)` — create a new session
-- `GetSessionUser(ctx, sessionID)` — load the user for a valid session token
-- `DeleteSession(ctx, sessionID)` — delete a session (logout)
-- `CleanExpiredSessions(ctx)` — purge expired sessions
-- `EnsureAdminUser(ctx, tenantSlug, password)` — create the admin user at startup
-- `VerifyPassword(ctx, userID, password)` — check a password against its bcrypt hash
-
-### ScreenStore (`internal/store/screen.go`)
-
-**Screen-user access control:**
-- `GetAccessibleScreens(ctx, userID)` — all screens the user may access
-- `HasUserScreenAccess(ctx, userID, screenID)` — whether the user may access the screen (boolean)
-- `AddUserToScreen(ctx, userID, screenID)` — grant a user access to a screen (row in `user_screen_permissions`)
-- `RemoveUserFromScreen(ctx, userID, screenID)` — revoke a user's access to a screen
-- `GetScreenUsers(ctx, screenID)` — all users with access to the screen
-
-## Current endpoints
-
-### Public (no auth)
-
-| Method | Path | Description |
-|--------|------|-------------|
-| GET    | `/healthz` | health check |
-| GET    | `/api/v1` | API entry point |
-| GET    | `/api/v1/meta` | meta information |
-| POST   | `/api/v1/player/status` | status ingest from the player agent |
-| GET    | `/api/v1/screens/status` | overview of all screen statuses |
-| GET    | `/api/v1/screens/{screenId}/status` | status of a single screen |
-| DELETE | `/api/v1/screens/{screenId}/status` | delete a screen status |
-| GET    | `/api/v1/screens/{screenId}/playlist` | playlist for the player (no auth) |
-| POST   | `/api/v1/screens/register` | agent self-registration |
-| POST   | `/api/v1/tools/message-wall/resolve` | message-wall resolution endpoint |
-| GET    | `/status` | HTML diagnostics page |
-| GET    | `/status/{screenId}` | HTML detail page for a single screen |
-| GET    | `/uploads/(unknown)` | serve uploaded files |
-| GET    | `/static/bulma.min.css` | static CSS |
-| GET    | `/static/Sortable.min.js` | static JS |
-| GET    | `/login` | login form |
-| POST   | `/login` | process login |
-| POST   | `/logout` | end session |
-
-### Logged-in users only (`RequireAuth`)
-
-| Method | Path | Description |
-|--------|------|-------------|
-| GET    | `/manage/{screenSlug}` | playlist management UI |
-| POST   | `/manage/{screenSlug}/upload` | upload a medium for a screen |
-| POST   | `/manage/{screenSlug}/items` | add an item to the playlist |
-| POST   | `/manage/{screenSlug}/items/{itemId}` | update an item |
-| POST   | `/manage/{screenSlug}/items/{itemId}/delete` | delete an item |
-| POST   | `/manage/{screenSlug}/reorder` | reorder items |
-| POST   | `/manage/{screenSlug}/media/{mediaId}/delete` | delete a medium |
-| GET    | `/api/v1/playlists/{screenId}` | fetch a playlist with metadata |
-| POST   | `/api/v1/playlists/{playlistId}/items` | add an item to the playlist (API) |
-| PATCH  | `/api/v1/items/{itemId}` | update an item (API) |
-| DELETE | `/api/v1/items/{itemId}` | delete an item (API) |
-| PUT    | `/api/v1/playlists/{playlistId}/order` | reorder items (API) |
-| PATCH  | `/api/v1/playlists/{playlistId}/duration` | set the default duration (API) |
-| DELETE | `/api/v1/media/{id}` | delete a medium (API) |
-
-### Admins only (`RequireAuth` + `RequireAdmin`)
-
-| Method | Path | Description |
-|--------|------|-------------|
-| GET    | `/admin` | admin overview |
-| POST   | `/admin/screens/provision` | start a provisioning job |
-| POST   | `/admin/screens` | create a new screen |
-| POST   | `/admin/screens/{screenId}/delete` | delete a screen |
-| POST   | `/admin/users` | create a screen user |
-| POST   | `/admin/users/{userID}/delete` | delete a screen user |
-| POST   | `/admin/screens/{screenID}/users` | grant a user access to a screen |
-| POST   | `/admin/screens/{screenID}/users/{userID}/remove` | revoke a user's access to a screen |
-
-### Tenant-scoped (`RequireAuth` + `RequireTenantAccess`)
-
-| Method | Path | Description |
-|--------|------|-------------|
-| GET    | `/tenant/{tenantSlug}/dashboard` | tenant self-service dashboard |
-| POST   | `/tenant/{tenantSlug}/upload` | upload a medium |
-| POST   | `/tenant/{tenantSlug}/media/{mediaId}/delete` | delete a medium |
-| GET    | `/api/v1/tenants/{tenantSlug}/screens` | list a tenant's screens |
-| POST   | `/api/v1/tenants/{tenantSlug}/screens` | create a screen |
-| GET    | `/api/v1/tenants/{tenantSlug}/media` | list a tenant's media |
-| POST   | `/api/v1/tenants/{tenantSlug}/media` | upload a medium (API) |
-
-## Configuration
-
-All values via environment variables:
-
-| Variable | Meaning | Default |
-|----------|---------|---------|
-| `MORZ_INFOBOARD_HTTP_ADDR` | HTTP listen address | `:8080` |
-| `DATABASE_URL` | PostgreSQL connection string | — |
-| `MORZ_INFOBOARD_UPLOAD_DIR` | directory for uploaded media | `/tmp/morz-uploads` |
-| `MORZ_INFOBOARD_STATUS_STORE_PATH` | path to the JSON persistence file for the status store | empty (in-memory) |
-| `MORZ_INFOBOARD_ADMIN_PASSWORD` | password of the initial admin user (empty = none created) | empty |
-| `MORZ_INFOBOARD_DEFAULT_TENANT` | slug of the tenant the admin is assigned to | `morz` |
-| `MORZ_INFOBOARD_DEV_MODE` | `true` = session cookie without the Secure flag (local only) | `false` |
-| `MORZ_INFOBOARD_REGISTER_SECRET` | pre-shared secret for POST /api/v1/screens/register | empty |
-| `MORZ_INFOBOARD_MQTT_BROKER` | MQTT broker URL (empty = no MQTT) | empty |
-| `MORZ_INFOBOARD_MQTT_USERNAME` | MQTT username | empty |
-| `MORZ_INFOBOARD_MQTT_PASSWORD` | MQTT password | empty |
-
-More detailed description and local startup examples: `DEVELOPMENT.md`.
-
-## Middleware
-
-### `RequireScreenAccess`
-
-Middleware for role-based access control on screen resources.
-
-**Behavior:**
-- admins may access all screens
-- screen users may only access screens they are listed for in `user_screen_permissions`
-- tenant users may access all screens of their tenant
-- response: `403 Forbidden` when not authorized
-
-**Used on:**
-- `GET /api/v1/screens/{screenId}/playlist`
-- `POST /manage/{screenSlug}/...`
-- all private screen endpoints
-
-## Migrations
-
-- `001_core.sql` — initial schema (tenants, screens, playlists, media, etc.)
-- `002_auth.sql` — auth tables (`users`, `sessions`)
-- `003_user_screen_permissions.sql` — screen-user management (`user_screen_permissions`)
+- `GET /healthz`
+- `GET /api/v1`
+- `GET /api/v1/meta`
+- `POST /api/v1/tools/message-wall/resolve` as the first server-side layout resolution for `message_wall`
+- a unified API error format in the HTTP layer
@@ -2,31 +2,22 @@ package main

 import (
     "log"
-    "log/slog"
     "os"

     "git.az-it.net/az/morz-infoboard/server/backend/internal/app"
 )

 func main() {
-    // V6: structured JSON logging as the default logger.
-    // Every slog.Info/slog.Error call in the program uses this handler.
-    slogHandler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
-        Level: slog.LevelInfo,
-    })
-    slog.SetDefault(slog.New(slogHandler))
-
-    // Compatibility logger for components that still expect *log.Logger.
-    stdLogger := log.New(os.Stdout, "backend ", log.LstdFlags|log.LUTC)
+    logger := log.New(os.Stdout, "backend ", log.LstdFlags|log.LUTC)

     application, err := app.New()
     if err != nil {
-        stdLogger.Fatalf("init app: %v", err)
+        logger.Fatalf("init app: %v", err)
     }

-    slog.Info("backend starting", "addr", application.Config.HTTPAddress)
+    logger.Printf("starting backend on %s", application.Config.HTTPAddress)

     if err := application.Run(); err != nil {
-        stdLogger.Fatalf("run backend: %v", err)
+        logger.Fatalf("run backend: %v", err)
     }
 }
@@ -6,12 +6,8 @@ import (
     "encoding/hex"
     "errors"
     "log"
-    "log/slog"
     "net/http"
     "os"
-    "os/signal"
-    "syscall"
-    "time"

     "git.az-it.net/az/morz-infoboard/server/backend/internal/config"
     "git.az-it.net/az/morz-infoboard/server/backend/internal/db"
@@ -21,17 +17,13 @@ import (
 )

 type App struct {
-    Config    config.Config
-    server    *http.Server
-    notifier  *mqttnotifier.Notifier
-    authStore *store.AuthStore
-    dbPool    *db.Pool // V7: for db.Close() during shutdown
-    logger    *log.Logger
+    Config   config.Config
+    server   *http.Server
+    notifier *mqttnotifier.Notifier
 }

 func New() (*App, error) {
     cfg := config.Load()
-    // Compatibility logger for db.Connect (expects *log.Logger).
     logger := log.New(os.Stdout, "backend ", log.LstdFlags|log.LUTC)

     // Ensure upload directory exists.
@@ -68,20 +60,19 @@ func New() (*App, error) {
             return nil, err
         }
         adminPassword = hex.EncodeToString(buf)
-        // V6: slog instead of log.Printf — never log the password (K5).
-        slog.Info("admin password generated", "event", "admin_password_generated", "password", "[gesetzt]")
+        logger.Printf("event=admin_password_generated password=%s", adminPassword)
     }
     if err := authStore.EnsureAdminUser(context.Background(), cfg.DefaultTenantSlug, adminPassword); err != nil {
-        slog.Error("ensure admin user failed", "event", "ensure_admin_user_failed", "err", err)
+        logger.Printf("event=ensure_admin_user_failed err=%v", err)
         // Non-fatal: server starts even if admin setup fails.
     }

     // MQTT notifier (no-op when broker not configured).
     notifier := mqttnotifier.New(cfg.MQTTBroker, cfg.MQTTUsername, cfg.MQTTPassword)
     if cfg.MQTTBroker != "" {
-        slog.Info("mqtt notifier enabled", "event", "mqtt_notifier_enabled", "broker", cfg.MQTTBroker)
+        logger.Printf("event=mqtt_notifier_enabled broker=%s", cfg.MQTTBroker)
     } else {
-        slog.Info("mqtt notifier disabled", "event", "mqtt_notifier_disabled", "reason", "no_broker_configured")
+        logger.Printf("event=mqtt_notifier_disabled reason=no_broker_configured")
     }

     handler := httpapi.NewRouter(httpapi.RouterDeps{
@@ -98,61 +89,14 @@ func New() (*App, error) {
     })

     return &App{
-        Config:    cfg,
-        server:    &http.Server{Addr: cfg.HTTPAddress, Handler: handler},
-        notifier:  notifier,
-        authStore: authStore,
-        dbPool:    pool, // V7: reference for shutdown
-        logger:    logger,
+        Config:   cfg,
+        server:   &http.Server{Addr: cfg.HTTPAddress, Handler: handler},
+        notifier: notifier,
     }, nil
 }

 func (a *App) Run() error {
     defer a.notifier.Close()

-    // W2+V7: graceful shutdown with signal handling.
-    // The context is cancelled on SIGTERM/SIGINT, which initiates the shutdown.
-    ctx, cancel := context.WithCancel(context.Background())
-    defer cancel()
-
-    // Session cleanup: expired sessions are removed from the DB every hour.
-    go func() {
-        ticker := time.NewTicker(1 * time.Hour)
-        defer ticker.Stop()
-        for {
-            select {
-            case <-ticker.C:
-                if err := a.authStore.CleanExpiredSessions(ctx); err != nil {
-                    slog.Error("session cleanup failed", "event", "session_cleanup_failed", "err", err)
-                } else {
-                    slog.Info("session cleanup ok", "event", "session_cleanup_ok")
-                }
-            case <-ctx.Done():
-                return
-            }
-        }
-    }()
-
-    // W2: signal handler for graceful shutdown.
-    sigCh := make(chan os.Signal, 1)
-    signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
-    go func() {
-        sig := <-sigCh
-        slog.Info("shutdown signal received", "event", "shutdown_signal", "signal", sig.String())
-        cancel() // stop the session cleanup

-        // Shut down the HTTP server with a timeout.
-        shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 15*time.Second)
-        defer shutdownCancel()
-        if err := a.server.Shutdown(shutdownCtx); err != nil {
-            slog.Error("shutdown error", "event", "shutdown_error", "err", err)
-        }
-
-        // V7: close the DB pool.
-        a.dbPool.Close()
-        slog.Info("shutdown complete", "event", "shutdown_complete")
-    }()
-
     err := a.server.ListenAndServe()
     if errors.Is(err, http.ErrServerClosed) {
         return nil
@@ -15,10 +15,6 @@ type Config struct {
     AdminPassword     string // MORZ_INFOBOARD_ADMIN_PASSWORD
     DefaultTenantSlug string // MORZ_INFOBOARD_DEFAULT_TENANT (default: "morz")
     DevMode           bool   // MORZ_INFOBOARD_DEV_MODE — when true, session cookie works without HTTPS
-    // RegisterSecret protects POST /api/v1/screens/register (K6).
-    // When set, the player must send the header X-Register-Secret: <secret>.
-    // When empty, the endpoint is reachable by everyone (backwards compatibility).
-    RegisterSecret string // MORZ_INFOBOARD_REGISTER_SECRET
 }

 func Load() Config {
@@ -33,7 +29,6 @@ func Load() Config {
         AdminPassword:     os.Getenv("MORZ_INFOBOARD_ADMIN_PASSWORD"),
         DefaultTenantSlug: getenv("MORZ_INFOBOARD_DEFAULT_TENANT", "morz"),
         DevMode:           os.Getenv("MORZ_INFOBOARD_DEV_MODE") == "true",
-        RegisterSecret:    os.Getenv("MORZ_INFOBOARD_REGISTER_SECRET"),
     }
 }
@@ -1,22 +0,0 @@
--- Migration 003: screen-user permission system
--- Adds the role 'screen_user' and the M:N table user_screen_permissions.
-
--- New column 'role' on users (DEFAULT 'screen_user' for future users).
-ALTER TABLE users ADD COLUMN IF NOT EXISTS role TEXT DEFAULT 'screen_user';
-
--- Set existing admins to 'admin' (all users in the default tenant morz).
-UPDATE users SET role = 'admin'
-WHERE tenant_id = (SELECT id FROM tenants WHERE slug = 'morz')
-  AND role IS DISTINCT FROM 'admin';
-
--- M:N table: which users may manage which screens.
-CREATE TABLE IF NOT EXISTS user_screen_permissions (
-    id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    user_id    TEXT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
-    screen_id  TEXT NOT NULL REFERENCES screens(id) ON DELETE CASCADE,
-    created_at TIMESTAMPTZ DEFAULT NOW(),
-    UNIQUE(user_id, screen_id)
-);
-
-CREATE INDEX IF NOT EXISTS idx_user_screen_perms_user ON user_screen_permissions(user_id);
-CREATE INDEX IF NOT EXISTS idx_user_screen_perms_screen ON user_screen_permissions(screen_id);
@@ -1,68 +0,0 @@
-// Package fileutil contains shared file helpers for upload handlers (V1, N6).
-package fileutil
-
-import (
-    "fmt"
-    "io"
-    "os"
-    "path/filepath"
-    "strings"
-    "time"
-)
-
-// SaveUploadedFile stores a file stream in uploadDir/{tenantSlug}/ and returns
-// the relative HTTP path (/uploads/{tenantSlug}/filename) as well as the number
-// of bytes written.
-//
-// V1: shared upload logic — replaces three duplicated implementations.
-// N6: tenant-specific directory instead of a shared location.
-func SaveUploadedFile(file io.Reader, originalFilename, title, uploadDir, tenantSlug string) (storagePath string, size int64, err error) {
-    safeSlug := sanitize(tenantSlug)
-    if safeSlug == "" {
-        safeSlug = "default"
-    }
-    tenantDir := filepath.Join(uploadDir, safeSlug)
-    if mkErr := os.MkdirAll(tenantDir, 0755); mkErr != nil {
-        return "", 0, fmt.Errorf("fileutil: mkdir %s: %w", tenantDir, mkErr)
-    }
-
-    ext := filepath.Ext(originalFilename)
-    safeTitle := sanitize(title)
-    if safeTitle == "" {
-        safeTitle = "file"
-    }
-    filename := fmt.Sprintf("%d_%s%s", time.Now().UnixNano(), safeTitle, ext)
-    destPath := filepath.Join(tenantDir, filename)
-
-    dest, createErr := os.Create(destPath)
-    if createErr != nil {
-        return "", 0, fmt.Errorf("fileutil: create %s: %w", destPath, createErr)
-    }
-    defer dest.Close()
-
-    n, copyErr := io.Copy(dest, file)
-    if copyErr != nil {
-        os.Remove(destPath) //nolint:errcheck
-        return "", 0, fmt.Errorf("fileutil: write %s: %w", destPath, copyErr)
-    }
-
-    return "/uploads/" + safeSlug + "/" + filename, n, nil
-}
-
-// sanitize converts a string into a safe filename component
-// (only a-z, A-Z, 0-9, -, _; at most 40 characters).
-func sanitize(s string) string {
-    var b strings.Builder
-    for _, r := range s {
-        if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '-' || r == '_' {
-            b.WriteRune(r)
-        } else {
-            b.WriteRune('_')
-        }
-    }
-    out := b.String()
-    if len(out) > 40 {
-        out = out[:40]
-    }
-    return out
-}
@@ -1,98 +0,0 @@
-package httpapi
-
-// csrf.go — double-submit-cookie CSRF protection (K1) and neuteredFileSystem (N5).
-//
-// Every state-changing request (POST/PUT/PATCH/DELETE) must:
-//  1. carry the cookie "morz_csrf", and
-//  2. send the same value as the form field "csrf_token" or the header "X-CSRF-Token".
-//
-// Token creation: SetCSRFCookie is called when the login/manage pages are rendered.
-// Token validation: the CSRFProtect middleware checks that cookie and payload match.
-//
-// SameSite=Lax already protects against most CSRF attacks from other domains,
-// but the double-submit pattern adds protection for forms that could be
-// embedded on other pages via GET.
-
-import (
-    "crypto/rand"
-    "encoding/hex"
-    "net/http"
-)
-
-const (
-    csrfCookieName = "morz_csrf"
-    csrfFieldName  = "csrf_token"
-    csrfHeaderName = "X-CSRF-Token"
-)
-
-// GenerateCSRFToken creates a 32-byte random hex token.
-func GenerateCSRFToken() (string, error) {
-    buf := make([]byte, 32)
-    if _, err := rand.Read(buf); err != nil {
-        return "", err
-    }
-    return hex.EncodeToString(buf), nil
-}
-
-// SetCSRFCookie sets (or renews) the CSRF cookie on the response.
-// It returns the token so it can be embedded in template data.
-func SetCSRFCookie(w http.ResponseWriter, r *http.Request, devMode bool) string {
-    // Reuse an existing token when present.
-    if c, err := r.Cookie(csrfCookieName); err == nil && c.Value != "" {
-        return c.Value
-    }
-    token, err := GenerateCSRFToken()
-    if err != nil {
-        // On error return an empty token — handlers must check for this case.
-        return ""
-    }
-    http.SetCookie(w, &http.Cookie{
-        Name:     csrfCookieName,
-        Value:    token,
-        Path:     "/",
-        HttpOnly: false, // must stay readable by JavaScript so clients can echo it in the header
-        Secure:   !devMode,
-        SameSite: http.SameSiteLaxMode,
-        MaxAge:   8 * 3600, // 8h — matches sessionTTL
-    })
-    return token
-}
-
-// CSRFTokenFromRequest reads the CSRF token from the form field or header.
-func CSRFTokenFromRequest(r *http.Request) string {
-    // The header takes precedence (API clients).
-    if h := r.Header.Get(csrfHeaderName); h != "" {
-        return h
-    }
-    // Form field (HTML forms).
-    return r.FormValue(csrfFieldName)
-}
-
-// CSRFProtect is middleware for POST/PUT/PATCH/DELETE requests.
-// It checks that the CSRF token in the request matches the cookie.
-// GET/HEAD/OPTIONS requests are passed through.
-func CSRFProtect(devMode bool) func(http.Handler) http.Handler {
-    return func(next http.Handler) http.Handler {
-        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-            switch r.Method {
-            case http.MethodGet, http.MethodHead, http.MethodOptions, http.MethodTrace:
-                next.ServeHTTP(w, r)
-                return
-            }
-
-            cookie, err := r.Cookie(csrfCookieName)
-            if err != nil || cookie.Value == "" {
-                http.Error(w, "CSRF-Token fehlt (Cookie)", http.StatusForbidden)
-                return
-            }
-
-            token := CSRFTokenFromRequest(r)
-            if token == "" || token != cookie.Value {
-                http.Error(w, "Ungültiger CSRF-Token", http.StatusForbidden)
-                return
-            }
-
-            next.ServeHTTP(w, r)
-        })
-    }
-}
@@ -8,63 +8,43 @@ import (
     "time"

     "git.az-it.net/az/morz-infoboard/server/backend/internal/config"
-    "git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
     "git.az-it.net/az/morz-infoboard/server/backend/internal/store"
     "golang.org/x/crypto/bcrypt"
 )

-// handleScreenUserRedirect looks up accessible screens for a screen_user and
-// redirects to the first one. If none exist, it redirects to an error page.
-func handleScreenUserRedirect(w http.ResponseWriter, r *http.Request, screenStore *store.ScreenStore, user *store.User) {
-    screens, err := screenStore.GetAccessibleScreens(r.Context(), user.ID)
-    if err != nil || len(screens) == 0 {
-        http.Redirect(w, r, "/login?error=no_screens", http.StatusSeeOther)
-        return
-    }
-    http.Redirect(w, r, "/manage/"+screens[0].Slug, http.StatusSeeOther)
-}
-
-const sessionTTL = 8 * time.Hour
-
-// sessionCookieName is an alias for the central constant (V5).
-const sessionCookieName = reqcontext.SessionCookieName
+const (
+    sessionCookieName = "morz_session"
+    sessionTTL        = 8 * time.Hour
+)

 // loginData is the template data for the login page.
 type loginData struct {
-    Error     string
-    Next      string
-    CSRFToken string
+    Error string
+    Next  string
 }

 // HandleLoginUI renders the login form (GET /login).
-// If a valid session cookie is already present, the user is redirected based on role.
-func HandleLoginUI(authStore *store.AuthStore, screenStore *store.ScreenStore, cfg config.Config) http.HandlerFunc {
+// If a valid session cookie is already present, the user is redirected to /admin
+// (or the tenant dashboard once tenants are wired up in Phase 3).
+func HandleLoginUI(authStore *store.AuthStore) http.HandlerFunc {
     tmpl := template.Must(template.New("login").Parse(loginTmpl))
     return func(w http.ResponseWriter, r *http.Request) {
         // Redirect if already logged in.
         if cookie, err := r.Cookie(sessionCookieName); err == nil {
             if u, err := authStore.GetSessionUser(r.Context(), cookie.Value); err == nil {
-                switch u.Role {
-                case "admin":
+                if u.Role == "admin" {
                     http.Redirect(w, r, "/admin", http.StatusSeeOther)
-                case "screen_user":
-                    handleScreenUserRedirect(w, r, screenStore, u)
-                default:
-                    if u.TenantSlug != "" {
-                        http.Redirect(w, r, "/tenant/"+u.TenantSlug+"/dashboard", http.StatusSeeOther)
-                    } else {
-                        http.Redirect(w, r, "/admin", http.StatusSeeOther)
-                    }
+                } else if u.TenantSlug != "" {
+                    http.Redirect(w, r, "/manage/"+u.TenantSlug, http.StatusSeeOther)
+                } else {
+                    http.Redirect(w, r, "/admin", http.StatusSeeOther)
                 }
                 return
             }
         }

-        // K1: set/renew the CSRF token for the login form.
-        csrfToken := setCSRFCookie(w, r, cfg.DevMode)
-
         next := r.URL.Query().Get("next")
-        data := loginData{Next: sanitizeNext(next), CSRFToken: csrfToken}
+        data := loginData{Next: sanitizeNext(next)}
         w.Header().Set("Content-Type", "text/html; charset=utf-8")
         _ = tmpl.Execute(w, data)
     }
@@ -73,21 +53,16 @@ func HandleLoginUI(authStore *store.AuthStore, screenStore *store.ScreenStore, c

 // HandleLoginPost handles form submission (POST /login).
 // It validates credentials, creates a session, sets the session cookie and
 // redirects the user based on their role or the ?next= parameter.
-func HandleLoginPost(authStore *store.AuthStore, screenStore *store.ScreenStore, cfg config.Config) http.HandlerFunc {
+func HandleLoginPost(authStore *store.AuthStore, cfg config.Config) http.HandlerFunc {
 	tmpl := template.Must(template.New("login").Parse(loginTmpl))

+	renderError := func(w http.ResponseWriter, next, msg string) {
+		w.Header().Set("Content-Type", "text/html; charset=utf-8")
+		w.WriteHeader(http.StatusUnauthorized)
+		_ = tmpl.Execute(w, loginData{Error: msg, Next: next})
+	}
+
 	return func(w http.ResponseWriter, r *http.Request) {
-		// K1: renew the CSRF cookie and provide the token for re-rendering errors.
-		// The token must also be present in the hidden field of error responses so
-		// that the next submit attempt passes the CSRF check.
-		csrfToken := setCSRFCookie(w, r, cfg.DevMode)
-
-		renderError := func(w http.ResponseWriter, next, msg string) {
-			w.Header().Set("Content-Type", "text/html; charset=utf-8")
-			w.WriteHeader(http.StatusUnauthorized)
-			_ = tmpl.Execute(w, loginData{Error: msg, Next: next, CSRFToken: csrfToken})
-		}
-
 		if err := r.ParseForm(); err != nil {
 			renderError(w, "", "Ungültige Anfrage.")
 			return
@@ -104,12 +79,7 @@ func HandleLoginPost(authStore *store.AuthStore, screenStore *store.ScreenStore,

 		user, err := authStore.GetUserByUsername(r.Context(), username)
 		if err != nil {
-			// Mitigate user-enumeration timing leak: run a dummy bcrypt
-			// comparison so that unknown-user and wrong-password responses
-			// take approximately the same time. The dummy hash is a
-			// pre-computed bcrypt hash of "dummy" (cost 12).
-			const dummyHash = "$2a$12$44H3KPmJUDdgNss7JB7Qneg9GWEl2OgxWwSqVpXRaQdki8T3U9ED2"
-			_ = bcrypt.CompareHashAndPassword([]byte(dummyHash), []byte(password))
+			// Constant-time failure — same message for unknown user and wrong password.
 			renderError(w, next, "Benutzername oder Passwort falsch.")
 			return
 		}
@@ -143,11 +113,9 @@ func HandleLoginPost(authStore *store.AuthStore, screenStore *store.ScreenStore,

 		switch user.Role {
 		case "admin":
 			http.Redirect(w, r, "/admin", http.StatusSeeOther)
-		case "screen_user":
-			handleScreenUserRedirect(w, r, screenStore, user)
 		default:
 			if user.TenantSlug != "" {
-				http.Redirect(w, r, "/tenant/"+user.TenantSlug+"/dashboard", http.StatusSeeOther)
+				http.Redirect(w, r, "/manage/"+user.TenantSlug, http.StatusSeeOther)
 			} else {
 				http.Redirect(w, r, "/admin", http.StatusSeeOther)
 			}
@@ -156,22 +124,19 @@ func HandleLoginPost(authStore *store.AuthStore, screenStore *store.ScreenStore,
 	}

 // HandleLogoutPost deletes the session and clears the cookie (POST /logout).
-func HandleLogoutPost(authStore *store.AuthStore, cfg config.Config) http.HandlerFunc {
+func HandleLogoutPost(authStore *store.AuthStore) http.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) {
 		if cookie, err := r.Cookie(sessionCookieName); err == nil {
 			_ = authStore.DeleteSession(r.Context(), cookie.Value)
 		}

 		// Expire the cookie immediately.
-		// Secure must match the flag used when the cookie was set so that
-		// browsers on HTTPS connections honour the expiry directive.
 		http.SetCookie(w, &http.Cookie{
 			Name:     sessionCookieName,
 			Value:    "",
 			Path:     "/",
 			MaxAge:   -1,
 			HttpOnly: true,
-			Secure:   !cfg.DevMode,
 			SameSite: http.SameSiteLaxMode,
 		})

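The login handlers above pass the ?next= redirect target through sanitizeNext before using it, but that function's body is never shown in this diff. A minimal sketch of what such an open-redirect guard typically looks like (the implementation below is an assumption, not the project's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeNext is a hypothetical sketch of the ?next= guard: only same-site,
// absolute-path targets survive; everything else collapses to "".
func sanitizeNext(next string) string {
	// Must be an absolute path on this host ("/manage/info10"), not a
	// protocol-relative URL ("//evil.example") or an absolute URL ("https://...").
	if !strings.HasPrefix(next, "/") || strings.HasPrefix(next, "//") {
		return ""
	}
	// Reject embedded control characters that could be abused for header injection.
	if strings.ContainsAny(next, "\r\n") {
		return ""
	}
	return next
}

func main() {
	fmt.Println(sanitizeNext("/manage/info10"))     // same-site path is kept
	fmt.Println(sanitizeNext("https://evil.example/")) // dropped
	fmt.Println(sanitizeNext("//evil.example/"))       // dropped
}
```

Whatever the real implementation does, the important property is that a `?next=` value supplied by the client can never redirect the browser off-site after login.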
@@ -1,44 +0,0 @@
-package manage
-
-// csrf_helpers.go: CSRF helper functions for the manage package (K1).
-//
-// The manage package must not import httpapi (that would create an import cycle),
-// so the minimal CSRF helpers are duplicated here.
-// The actual CSRF middleware lives in httpapi/csrf.go.
-
-import (
-	"crypto/rand"
-	"encoding/hex"
-	"net/http"
-)
-
-const (
-	csrfCookieName = "morz_csrf"
-	// CSRFFieldName is the name of the hidden form field carrying the CSRF token.
-	// Embedded in templates as {{.CSRFToken}}.
-	CSRFFieldName = "csrf_token"
-)
-
-// setCSRFCookie sets (or renews) the CSRF cookie and returns the token.
-// Called by handlers that render GET pages containing forms.
-func setCSRFCookie(w http.ResponseWriter, r *http.Request, devMode bool) string {
-	// Reuse an existing token.
-	if c, err := r.Cookie(csrfCookieName); err == nil && c.Value != "" {
-		return c.Value
-	}
-	buf := make([]byte, 32)
-	if _, err := rand.Read(buf); err != nil {
-		return ""
-	}
-	token := hex.EncodeToString(buf)
-	http.SetCookie(w, &http.Cookie{
-		Name:     csrfCookieName,
-		Value:    token,
-		Path:     "/",
-		HttpOnly: false, // does not need to be read by JS; forms use the hidden field
-		Secure:   !devMode,
-		SameSite: http.SameSiteLaxMode,
-		MaxAge:   8 * 3600, // 8h
-	})
-	return token
-}
@@ -2,13 +2,14 @@ package manage

 import (
 	"encoding/json"
+	"fmt"
+	"io"
 	"net/http"
 	"os"
 	"path/filepath"
 	"strings"
+	"time"

-	"git.az-it.net/az/morz-infoboard/server/backend/internal/fileutil"
-	"git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
 	"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
 )

@@ -45,8 +46,6 @@ func HandleUploadMedia(tenants *store.TenantStore, media *store.MediaStore, uplo
 		}
 		tenantID := tenant.ID

-		// W3: MaxBytesReader limits the entire request body to maxUploadSize.
-		r.Body = http.MaxBytesReader(w, r.Body, maxUploadSize)
 		if err := r.ParseMultipartForm(maxUploadSize); err != nil {
 			http.Error(w, "request too large or not multipart", http.StatusBadRequest)
 			return
@@ -91,15 +90,31 @@ func HandleUploadMedia(tenants *store.TenantStore, media *store.MediaStore, uplo
 			title = strings.TrimSuffix(header.Filename, filepath.Ext(header.Filename))
 		}

-		// V1+N6: shared upload function, tenant-specific directory.
-		storagePath, size, err := fileutil.SaveUploadedFile(file, header.Filename, title, uploadDir, r.PathValue("tenantSlug"))
+		// Generate unique storage path.
+		ext := filepath.Ext(header.Filename)
+		filename := fmt.Sprintf("%d_%s%s", time.Now().UnixNano(), sanitize(title), ext)
+		destPath := filepath.Join(uploadDir, filename)
+
+		dest, err := os.Create(destPath)
+		if err != nil {
+			http.Error(w, "storage error", http.StatusInternalServerError)
+			return
+		}
+		defer dest.Close()
+
+		size, err := io.Copy(dest, file)
+		if err != nil {
+			os.Remove(destPath) //nolint:errcheck
+			http.Error(w, "write error", http.StatusInternalServerError)
+			return
+		}
+
+		// Storage path relative (served via /uploads/).
+		storagePath := "/uploads/" + filename

 		asset, err := media.Create(r.Context(), tenantID, title, assetType, storagePath, "", mimeType, size)
 		if err != nil {
+			os.Remove(destPath) //nolint:errcheck
 			http.Error(w, "db error", http.StatusInternalServerError)
 			return
 		}
@@ -123,17 +138,6 @@ func HandleDeleteMedia(media *store.MediaStore, uploadDir string) http.HandlerFu
 			return
 		}

-		// K3: tenant check: only the owning tenant or an admin may delete.
-		u := reqcontext.UserFromContext(r.Context())
-		if u == nil {
-			http.Error(w, "Forbidden", http.StatusForbidden)
-			return
-		}
-		if u.Role != "admin" && u.TenantID != asset.TenantID {
-			http.Error(w, "Forbidden", http.StatusForbidden)
-			return
-		}
-
 		// Delete physical file if it's a local upload.
 		if asset.StoragePath != "" {
 			filename := filepath.Base(asset.StoragePath)

@@ -9,28 +9,9 @@ import (
 	"time"

 	"git.az-it.net/az/morz-infoboard/server/backend/internal/mqttnotifier"
-	"git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
 	"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
 )

-// requirePlaylistAccess checks whether the logged-in user belongs to the playlist's tenant.
-// Returns true if access is allowed; writes a 403 and returns false otherwise.
-func requirePlaylistAccess(w http.ResponseWriter, r *http.Request, playlist *store.Playlist) bool {
-	u := reqcontext.UserFromContext(r.Context())
-	if u == nil {
-		http.Error(w, "Forbidden", http.StatusForbidden)
-		return false
-	}
-	if u.Role == "admin" {
-		return true
-	}
-	if u.TenantID != playlist.TenantID {
-		http.Error(w, "Forbidden", http.StatusForbidden)
-		return false
-	}
-	return true
-}
-
 // HandleGetPlaylist returns the playlist and its items for a screen.
 func HandleGetPlaylist(screens *store.ScreenStore, playlists *store.PlaylistStore) http.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) {
@@ -67,16 +48,6 @@ func HandleAddItem(playlists *store.PlaylistStore, media *store.MediaStore, noti
 	return func(w http.ResponseWriter, r *http.Request) {
 		playlistID := r.PathValue("playlistId")

-		// K4: tenant check.
-		playlist, err := playlists.Get(r.Context(), playlistID)
-		if err != nil {
-			http.Error(w, "playlist not found", http.StatusNotFound)
-			return
-		}
-		if !requirePlaylistAccess(w, r, playlist) {
-			return
-		}
-
 		var body struct {
 			MediaAssetID string `json:"media_asset_id"`
 			Type         string `json:"type"`
@@ -143,16 +114,6 @@ func HandleUpdateItem(playlists *store.PlaylistStore, notifier *mqttnotifier.Not
 	return func(w http.ResponseWriter, r *http.Request) {
 		id := r.PathValue("itemId")

-		// K4: tenant check via the item's playlist.
-		playlist, err := playlists.GetByItemID(r.Context(), id)
-		if err != nil {
-			http.Error(w, "item not found", http.StatusNotFound)
-			return
-		}
-		if !requirePlaylistAccess(w, r, playlist) {
-			return
-		}
-
 		var body struct {
 			Title           string `json:"title"`
 			DurationSeconds int    `json:"duration_seconds"`
@@ -194,16 +155,6 @@ func HandleDeleteItem(playlists *store.PlaylistStore, notifier *mqttnotifier.Not
 	return func(w http.ResponseWriter, r *http.Request) {
 		id := r.PathValue("itemId")

-		// K4: tenant check via the item's playlist.
-		playlist, err := playlists.GetByItemID(r.Context(), id)
-		if err != nil {
-			http.Error(w, "item not found", http.StatusNotFound)
-			return
-		}
-		if !requirePlaylistAccess(w, r, playlist) {
-			return
-		}
-
 		// Resolve slug before delete (item won't exist after).
 		slug, _ := playlists.ScreenSlugByItemID(r.Context(), id)

@@ -225,16 +176,6 @@ func HandleReorder(playlists *store.PlaylistStore, notifier *mqttnotifier.Notifi
 	return func(w http.ResponseWriter, r *http.Request) {
 		playlistID := r.PathValue("playlistId")

-		// K4: tenant check.
-		playlist, err := playlists.Get(r.Context(), playlistID)
-		if err != nil {
-			http.Error(w, "playlist not found", http.StatusNotFound)
-			return
-		}
-		if !requirePlaylistAccess(w, r, playlist) {
-			return
-		}
-
 		var ids []string
 		if err := json.NewDecoder(r.Body).Decode(&ids); err != nil {
 			http.Error(w, "body must be JSON array of item IDs", http.StatusBadRequest)
@@ -258,17 +199,6 @@
 func HandleUpdatePlaylistDuration(playlists *store.PlaylistStore) http.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) {
 		id := r.PathValue("playlistId")

-		// K4: tenant check.
-		playlist, err := playlists.Get(r.Context(), id)
-		if err != nil {
-			http.Error(w, "playlist not found", http.StatusNotFound)
-			return
-		}
-		if !requirePlaylistAccess(w, r, playlist) {
-			return
-		}
-
 		secs, err := strconv.Atoi(strings.TrimSpace(r.FormValue("default_duration_seconds")))
 		if err != nil || secs <= 0 {
 			http.Error(w, "invalid duration", http.StatusBadRequest)
@@ -364,7 +294,7 @@ func HandleCreateScreen(tenants *store.TenantStore, screens *store.ScreenStore)

 		screen, err := screens.Create(r.Context(), tenant.ID, body.Slug, body.Name, body.Orientation)
 		if err != nil {
-			http.Error(w, "db error", http.StatusInternalServerError)
+			http.Error(w, "db error: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

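The removed `requirePlaylistAccess` function and the K3/K4 checks all reduce to the same access rule. As a pure function it can be stated in a few lines (this is a distilled sketch, not the project's code, which also has to write the 403 response):

```go
package main

import "fmt"

// canAccess distills the tenant-isolation rule removed in this compare:
// admins may touch everything; every other user only resources that belong
// to their own tenant.
func canAccess(role, userTenantID, resourceTenantID string) bool {
	if role == "admin" {
		return true
	}
	// An empty tenant ID never grants access to anything.
	return userTenantID != "" && userTenantID == resourceTenantID
}

func main() {
	fmt.Println(canAccess("admin", "t1", "t2"))  // admins bypass the tenant check
	fmt.Println(canAccess("editor", "t1", "t1")) // same tenant: allowed
	fmt.Println(canAccess("editor", "t1", "t2")) // foreign tenant: denied
}
```

Keeping the rule in one helper, as the removed code did, is what makes it feasible to prove that every playlist mutation path enforces the same isolation.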
@@ -15,20 +15,8 @@ import (
 //
 // POST /api/v1/screens/register
 // Body: {"slug":"info10","name":"Info10 Bildschirm","orientation":"landscape"}
-//
-// K6: If MORZ_INFOBOARD_REGISTER_SECRET is set, the caller must send the
-// X-Register-Secret: <secret> header. Without a valid secret the endpoint
-// responds with 403 Forbidden.
 func HandleRegisterScreen(tenants *store.TenantStore, screens *store.ScreenStore, cfg config.Config) http.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) {
-		// K6: check the secret, if configured.
-		if cfg.RegisterSecret != "" {
-			if r.Header.Get("X-Register-Secret") != cfg.RegisterSecret {
-				http.Error(w, "Forbidden", http.StatusForbidden)
-				return
-			}
-		}
-
 		var body struct {
 			Slug string `json:"slug"`
 			Name string `json:"name"`
@@ -61,7 +49,7 @@ func HandleRegisterScreen(tenants *store.TenantStore, screens *store.ScreenStore

 		screen, err := screens.Upsert(r.Context(), tenant.ID, body.Slug, body.Name, body.Orientation)
 		if err != nil {
-			http.Error(w, "db error", http.StatusInternalServerError)
+			http.Error(w, "db error: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

@@ -33,7 +33,6 @@ const loginTmpl = `<!DOCTYPE html>
 {{end}}

 <form method="POST" action="/login">
-	<input type="hidden" name="csrf_token" value="{{.CSRFToken}}">
 	{{if .Next}}
 	<input type="hidden" name="next" value="{{.Next}}">
 	{{end}}
@@ -261,7 +260,7 @@ const adminTmpl = `<!DOCTYPE html>
 	</div>
 </nav>

-<!-- Delete confirmation modal (screens) -->
+<!-- Delete confirmation modal -->
 <div id="delete-modal" class="modal">
 	<div class="modal-background" onclick="closeDeleteModal()"></div>
 	<div class="modal-card">
@@ -282,43 +281,6 @@ const adminTmpl = `<!DOCTYPE html>
 	</div>
 </div>

-<!-- Delete confirmation modal (users) -->
-<div id="delete-user-modal" class="modal">
-	<div class="modal-background" onclick="closeDeleteUserModal()"></div>
-	<div class="modal-card">
-		<header class="modal-card-head">
-			<p class="modal-card-title">Benutzer löschen?</p>
-			<button class="delete" aria-label="Schließen" onclick="closeDeleteUserModal()"></button>
-		</header>
-		<section class="modal-card-body">
-			<p>Soll Benutzer <strong id="delete-user-modal-name"></strong> wirklich gelöscht werden?</p>
-			<p class="has-text-grey is-size-7 mt-2">Alle Screen-Zuordnungen werden ebenfalls entfernt.</p>
-		</section>
-		<footer class="modal-card-foot">
-			<form id="delete-user-modal-form" method="POST">
-				<button class="button is-danger" type="submit">Wirklich löschen</button>
-			</form>
-			<button class="button" onclick="closeDeleteUserModal()">Abbrechen</button>
-		</footer>
-	</div>
-</div>
-
-<!-- Screen-user management modal -->
-<div id="screen-users-modal" class="modal">
-	<div class="modal-background" onclick="closeScreenUsersModal()"></div>
-	<div class="modal-card" style="width:600px;max-width:95vw">
-		<header class="modal-card-head">
-			<p class="modal-card-title" id="screen-users-modal-title">Benutzer verwalten</p>
-			<button class="delete" aria-label="Schließen" onclick="closeScreenUsersModal()"></button>
-		</header>
-		<section class="modal-card-body" id="screen-users-modal-body">
-		</section>
-		<footer class="modal-card-foot">
-			<button class="button" onclick="closeScreenUsersModal()">Schließen</button>
-		</footer>
-	</div>
-</div>
-
 <script>
 (function() {
 	var burger = document.querySelector('.navbar-burger[data-target="adminNavbar"]');
@@ -339,33 +301,8 @@ function openDeleteModal(action, name) {
 function closeDeleteModal() {
 	document.getElementById('delete-modal').classList.remove('is-active');
 }

-function openDeleteUserModal(action, name) {
-	document.getElementById('delete-user-modal-form').action = action;
-	document.getElementById('delete-user-modal-name').textContent = name;
-	document.getElementById('delete-user-modal').classList.add('is-active');
-}
-function closeDeleteUserModal() {
-	document.getElementById('delete-user-modal').classList.remove('is-active');
-}
-
-function openScreenUsersModal(screenId, screenName, html) {
-	document.getElementById('screen-users-modal-title').textContent = 'Benutzer: ' + screenName;
-	document.getElementById('screen-users-modal-body').innerHTML = html;
-	document.getElementById('screen-users-modal').classList.add('is-active');
-	// Re-inject CSRF tokens into newly added forms
-	injectCSRFNow();
-}
-function closeScreenUsersModal() {
-	document.getElementById('screen-users-modal').classList.remove('is-active');
-}
-
 document.addEventListener('keydown', function(e) {
-	if (e.key === 'Escape') {
-		closeDeleteModal();
-		closeDeleteUserModal();
-		closeScreenUsersModal();
-	}
+	if (e.key === 'Escape') closeDeleteModal();
 });
 </script>
 <script>
@@ -373,22 +310,14 @@
 	var msg = new URLSearchParams(window.location.search).get('msg');
 	if (!msg) return;
 	var texts = {
-		'uploaded': '✓ Medium erfolgreich hochgeladen.',
-		'deleted': '✓ Erfolgreich gelöscht.',
-		'saved': '✓ Änderungen gespeichert.',
-		'added': '✓ Erfolgreich hinzugefügt.',
-		'user_added': '✓ Benutzer angelegt.',
-		'user_deleted': '✓ Benutzer gelöscht.',
-		'user_added_to_screen': '✓ Benutzer zum Screen hinzugefügt.',
-		'user_removed_from_screen': '✓ Benutzer vom Screen entfernt.',
-		'error_empty': '⚠ Benutzername und Passwort erforderlich.',
-		'error_exists': '⚠ Benutzername bereits vergeben.',
-		'error_db': '⚠ Datenbankfehler.'
+		'uploaded': '✓ Medium erfolgreich hochgeladen.',
+		'deleted': '✓ Erfolgreich gelöscht.',
+		'saved': '✓ Änderungen gespeichert.',
+		'added': '✓ Erfolgreich hinzugefügt.'
 	};
-	var isError = msg.startsWith('error_');
 	var text = texts[msg] || '✓ Aktion erfolgreich.';
 	var n = document.createElement('div');
-	n.className = 'notification ' + (isError ? 'is-warning' : 'is-success');
+	n.className = 'notification is-success';
 	n.style.cssText = 'position:fixed;top:1rem;right:1rem;z-index:9999;max-width:380px;box-shadow:0 4px 12px rgba(0,0,0,.15)';
 	n.innerHTML = '<button class="delete"></button>' + text;
 	n.querySelector('.delete').addEventListener('click', function() { n.remove(); });
@@ -398,45 +327,16 @@
 		n.style.opacity = '0';
 		setTimeout(function() { n.remove(); }, 500);
 	}, 3000);
-	// Clean URL without reloading
-	var url = new URL(window.location.href);
-	url.searchParams.delete('msg');
-	history.replaceState(null, '', url.toString());
 })();
 </script>

 <section class="section pt-0">
 	<div class="container">

-		<!-- Tabs -->
-		<div class="tabs is-boxed mb-0">
-			<ul>
-				<li id="tab-screens" class="{{if eq .ActiveTab "screens"}}is-active{{end}}">
-					<a onclick="switchTab('screens')">Bildschirme</a>
-				</li>
-				<li id="tab-users" class="{{if eq .ActiveTab "users"}}is-active{{end}}">
-					<a onclick="switchTab('users')">Benutzer</a>
-				</li>
-			</ul>
-		</div>
-		<script>
-		function switchTab(name) {
-			document.querySelectorAll('.tab-panel').forEach(function(p) { p.style.display = 'none'; });
-			document.querySelectorAll('.tabs li').forEach(function(li) { li.classList.remove('is-active'); });
-			document.getElementById('panel-' + name).style.display = '';
-			document.getElementById('tab-' + name).classList.add('is-active');
-			var url = new URL(window.location.href);
-			url.searchParams.set('tab', name);
-			history.replaceState(null, '', url.toString());
-		}
-		document.addEventListener('DOMContentLoaded', function() {
-			var active = '{{.ActiveTab}}';
-			switchTab(active || 'screens');
-		});
-		</script>

-		<!-- Panel: screens -->
-		<div id="panel-screens" class="tab-panel box" style="border-radius:0 4px 4px 4px">
-
+		<div class="box">
 			<h2 class="title is-5">Bildschirme</h2>
 			{{if .Screens}}
 			<div style="overflow-x: auto">
@@ -447,27 +347,16 @@
 					<th>Slug</th>
 					<th>Format</th>
 					<th>Status</th>
-					<th>Benutzer</th>
 					<th>Aktionen</th>
 				</tr>
 			</thead>
 			<tbody>
 				{{range .Screens}}
-				{{$users := index $.ScreenUserMap .ID}}
 				<tr>
 					<td><strong>{{.Name}}</strong></td>
 					<td><code>{{.Slug}}</code></td>
 					<td>{{orientationLabel .Orientation}}</td>
 					<td id="status-{{.Slug}}"><span class="has-text-grey">⚪</span></td>
-					<td>
-						{{$screenID := .ID}}
-						{{$screenName := .Name}}
-						<button class="button is-small is-light"
-							type="button"
-							onclick="openScreenUsersModal('{{$screenID}}', {{$screenName | printf "%q"}}, buildScreenUsersHTML('{{$screenID}}', {{$screenName | printf "%q"}}))">
-							{{len $users}} Benutzer
-						</button>
-					</td>
 					<td>
 						<a class="button is-small is-link" href="/manage/{{.Slug}}">Playlist verwalten</a>

@@ -484,9 +373,9 @@
 			{{else}}
 			<p class="has-text-grey">Noch keine Bildschirme angelegt.</p>
 			{{end}}
+		</div>

 		<hr>

 		<div class="box">
 			<h2 class="title is-5">Neuen Bildschirm einrichten</h2>
 			<p class="mb-4 has-text-grey">
 				Fülle die Angaben aus. Der Bildschirm wird im Backend angelegt und du erhältst
@@ -545,9 +434,12 @@
 				</div>
 				<button class="button is-primary" type="submit">Anlegen &amp; Anleitung generieren →</button>
 			</form>
 		</div>

-		<details class="mt-4">
-			<summary class="has-text-grey" style="cursor:pointer">Bestehenden Screen manuell anlegen (nur DB-Eintrag, kein Deployment)</summary>
+		<div class="box">
+			<h2 class="title is-5">Bestehenden Screen manuell anlegen</h2>
+			<details>
+				<summary class="has-text-grey" style="cursor:pointer">Nur DB-Eintrag, kein Deployment (aufklappen)</summary>
 				<form method="POST" action="/admin/screens" class="mt-4">
 					<div class="columns is-vcentered">
 						<div class="column is-3">
@@ -589,151 +481,11 @@
 					</div>
 				</form>
 			</details>

-		</div><!-- /panel-screens -->
-
-		<!-- Panel: users -->
-		<div id="panel-users" class="tab-panel box" style="border-radius:0 4px 4px 4px">
-
-			<h2 class="title is-5">Screen-Benutzer</h2>
-			<p class="has-text-grey mb-4">Screen-Benutzer können sich einloggen und nur ihre zugeordneten Bildschirme verwalten.</p>
-
-			{{if .ScreenUsers}}
-			<table class="table is-fullwidth is-hoverable is-striped mb-5">
-				<thead>
-					<tr>
-						<th>Benutzername</th>
-						<th>Erstellt</th>
-						<th>Aktionen</th>
-					</tr>
-				</thead>
-				<tbody>
-					{{range .ScreenUsers}}
-					<tr>
-						<td><strong>{{.Username}}</strong></td>
-						<td>{{.CreatedAt.Format "02.01.2006 15:04"}}</td>
-						<td>
-							<button class="button is-small is-danger is-outlined"
-								type="button"
-								onclick="openDeleteUserModal('/admin/users/{{.ID}}/delete', '{{.Username}}')">Löschen</button>
-						</td>
-					</tr>
-					{{end}}
-				</tbody>
-			</table>
-			{{else}}
-			<p class="has-text-grey mb-4">Noch keine Screen-Benutzer angelegt.</p>
-			{{end}}
-
-			<hr>
-			<h3 class="title is-6">Neuen Benutzer anlegen</h3>
-			<form method="POST" action="/admin/users">
-				<div class="columns is-vcentered">
-					<div class="column is-4">
-						<div class="field">
-							<label class="label">Benutzername</label>
-							<div class="control">
-								<input class="input" type="text" name="username" placeholder="z.B. alice" required
-									autocomplete="off">
-							</div>
-						</div>
-					</div>
-					<div class="column is-4">
-						<div class="field">
-							<label class="label">Passwort</label>
-							<div class="control">
-								<input class="input" type="password" name="password" placeholder="Passwort" required
-									autocomplete="new-password">
-							</div>
-						</div>
-					</div>
-					<div class="column is-4">
-						<div class="field">
-							<label class="label">&nbsp;</label>
-							<button class="button is-primary" type="submit">Benutzer anlegen</button>
-						</div>
-					</div>
-				</div>
-			</form>
-
-		</div><!-- /panel-users -->
 	</div>

 	</div>
 </section>

-<!-- Embedded screen-user data for the modal (as JSON strings) -->
-<script>
-var _screenUsers = {{.ScreenUsers | screenUsersJSON}};
-var _screenUserMap = {{.ScreenUserMap | screenUserMapJSON}};
-
-function buildScreenUsersHTML(screenId, screenName) {
-	var users = _screenUserMap[screenId] || [];
-	var allUsers = _screenUsers || [];
-
-	// Already-assigned user IDs
-	var assignedIds = {};
-	users.forEach(function(u) { assignedIds[u.id] = true; });
-
-	// Table of assigned users
-	var html = '';
-	if (users.length > 0) {
-		html += '<table class="table is-fullwidth is-narrow mb-4"><thead><tr><th>Benutzer</th><th></th></tr></thead><tbody>';
-		users.forEach(function(u) {
-			html += '<tr><td>' + escHtml(u.username) + '</td>';
-			html += '<td><form method="POST" action="/admin/screens/' + escHtml(screenId) + '/users/' + escHtml(u.id) + '/remove" style="display:inline">';
-			html += '<button class="button is-small is-danger is-outlined" type="submit">Entfernen</button></form></td></tr>';
-		});
-		html += '</tbody></table>';
-	} else {
-		html += '<p class="has-text-grey mb-4">Noch keine Benutzer zugeordnet.</p>';
-	}
-
-	// Dropdown of available users
-	var available = allUsers.filter(function(u) { return !assignedIds[u.id]; });
-	if (available.length > 0) {
-		html += '<form method="POST" action="/admin/screens/' + escHtml(screenId) + '/users">';
-		html += '<div class="field has-addons">';
-		html += '<div class="control is-expanded"><div class="select is-fullwidth"><select name="user_id">';
-		available.forEach(function(u) {
-			html += '<option value="' + escHtml(u.id) + '">' + escHtml(u.username) + '</option>';
-		});
-		html += '</select></div></div>';
-		html += '<div class="control"><button class="button is-primary" type="submit">Hinzufügen</button></div>';
-		html += '</div></form>';
-	} else if (allUsers.length === 0) {
-		html += '<p class="has-text-grey is-size-7">Lege zuerst Benutzer im Tab "Benutzer" an.</p>';
-	} else {
-		html += '<p class="has-text-grey is-size-7">Alle Benutzer sind bereits zugeordnet.</p>';
-	}
-
-	return html;
-}
-
-function escHtml(s) {
-	return String(s)
-		.replace(/&/g, '&amp;')
-		.replace(/</g, '&lt;')
-		.replace(/>/g, '&gt;')
-		.replace(/"/g, '&quot;');
-}
-
-function injectCSRFNow() {
-	function getCookie(name) {
-		var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
-		return m ? decodeURIComponent(m[1]) : '';
-	}
-	var token = getCookie('morz_csrf');
-	if (!token) return;
-	document.querySelectorAll('form[method="POST"],form[method="post"]').forEach(function(f) {
-		if (!f.querySelector('input[name="csrf_token"]')) {
-			var inp = document.createElement('input');
-			inp.type = 'hidden'; inp.name = 'csrf_token'; inp.value = token;
-			f.appendChild(inp);
-		}
-	});
-}
-</script>

 <script>
 (function() {
 	fetch('/api/v1/screens/status')
@ -751,31 +503,6 @@ function injectCSRFNow() {
|
|||
.catch(function() {});
|
||||
})();
|
||||
</script>
|
||||
<script>
|
||||
// K1: CSRF Double-Submit — füge Token aus Cookie in alle POST-Formulare ein.
|
||||
(function() {
|
||||
function getCookie(name) {
|
||||
var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
|
||||
return m ? decodeURIComponent(m[1]) : '';
|
||||
}
|
||||
function injectCSRF() {
|
||||
var token = getCookie('morz_csrf');
|
||||
if (!token) return;
|
||||
document.querySelectorAll('form[method="POST"],form[method="post"]').forEach(function(f) {
|
||||
if (!f.querySelector('input[name="csrf_token"]')) {
|
||||
var inp = document.createElement('input');
|
||||
inp.type = 'hidden'; inp.name = 'csrf_token'; inp.value = token;
|
||||
f.appendChild(inp);
|
||||
}
|
||||
});
|
||||
}
|
||||
if (document.readyState === 'loading') {
|
||||
document.addEventListener('DOMContentLoaded', injectCSRF);
|
||||
} else {
|
||||
injectCSRF();
|
||||
}
|
||||
})();
|
||||
</script>
|
||||
</body>
|
||||
</html>`

@@ -1242,31 +969,6 @@ function startUpload() {
  xhr.send(formData);
}
</script>
<script>
// K1: CSRF double-submit: inject the token from the cookie into every POST form.
(function() {
  function getCookie(name) {
    var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
    return m ? decodeURIComponent(m[1]) : '';
  }
  function injectCSRF() {
    var token = getCookie('morz_csrf');
    if (!token) return;
    document.querySelectorAll('form[method="POST"],form[method="post"]').forEach(function(f) {
      if (!f.querySelector('input[name="csrf_token"]')) {
        var inp = document.createElement('input');
        inp.type = 'hidden'; inp.name = 'csrf_token'; inp.value = token;
        f.appendChild(inp);
      }
    });
  }
  if (document.readyState === 'loading') {
    document.addEventListener('DOMContentLoaded', injectCSRF);
  } else {
    injectCSRF();
  }
})();
</script>

</body>
</html>`

@@ -1,10 +1,10 @@
package manage

import (
	"bytes"
	"encoding/json"
	"fmt"
	"html/template"
	"log/slog"
	"io"
	"net/http"
	"os"
	"path/filepath"

@@ -12,91 +12,12 @@ import (
	"strings"
	"time"

	"git.az-it.net/az/morz-infoboard/server/backend/internal/fileutil"
	"git.az-it.net/az/morz-infoboard/server/backend/internal/mqttnotifier"
	"git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
	"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
)
// jsonSafe serializes v to a JSON string safe for inline use in a <script> block.
// It returns template.JS so the template engine does not HTML-escape it again.
func jsonSafe(v any) template.JS {
	b, err := json.Marshal(v)
	if err != nil {
		return template.JS("null")
	}
	return template.JS(b) //nolint:gosec
}

// renderTemplate renders t with data into a buffer and writes the result to w
// only when no error occurred. W7: prevents serving half-rendered HTML.
func renderTemplate(w http.ResponseWriter, t *template.Template, data any) {
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		http.Error(w, "Interner Fehler (Template)", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	buf.WriteTo(w) //nolint:errcheck
}

// requireScreenAccess checks whether the logged-in user may access the screen.
// Admins may access everything. Tenant users may only edit screens that belong
// to their own tenant. Returns true if access is allowed; writes a 403 and
// returns false otherwise.
func requireScreenAccess(w http.ResponseWriter, r *http.Request, screen *store.Screen) bool {
	u := reqcontext.UserFromContext(r.Context())
	if u == nil {
		http.Error(w, "Forbidden", http.StatusForbidden)
		return false
	}
	if u.Role == "admin" {
		return true
	}
	// Tenant user: the screen must belong to the user's own tenant. The
	// user's TenantID is carried in the request context, so we compare it
	// directly with the TenantID of the already-loaded screen.
	if u.TenantID != "" && u.TenantID != screen.TenantID {
		http.Error(w, "Forbidden", http.StatusForbidden)
		return false
	}
	return true
}

var tmplFuncs = template.FuncMap{
	// screenUsersJSON serializes a []*store.User slice to JSON for inline JS.
	"screenUsersJSON": func(users []*store.User) template.JS {
		type entry struct {
			ID       string `json:"id"`
			Username string `json:"username"`
		}
		out := make([]entry, 0, len(users))
		for _, u := range users {
			out = append(out, entry{ID: u.ID, Username: u.Username})
		}
		return jsonSafe(out)
	},
	// screenUserMapJSON serializes map[string][]*store.ScreenUserEntry to JSON.
	"screenUserMapJSON": func(m map[string][]*store.ScreenUserEntry) template.JS {
		type entry struct {
			ID       string `json:"id"`
			Username string `json:"username"`
		}
		out := map[string][]entry{}
		for screenID, users := range m {
			entries := make([]entry, 0, len(users))
			for _, u := range users {
				entries = append(entries, entry{ID: u.ID, Username: u.Username})
			}
			out[screenID] = entries
		}
		return jsonSafe(out)
	},
	"typeIcon": func(t string) string {
		switch t {
		case "image":
@@ -131,8 +52,8 @@ var tmplFuncs = template.FuncMap{
	},
}

// HandleAdminUI renders the admin overview page (screens + users tabs).
func HandleAdminUI(tenants *store.TenantStore, screens *store.ScreenStore, auth *store.AuthStore) http.HandlerFunc {
// HandleAdminUI renders the admin overview page.
func HandleAdminUI(tenants *store.TenantStore, screens *store.ScreenStore) http.HandlerFunc {
	t := template.Must(template.New("admin").Funcs(tmplFuncs).Parse(adminTmpl))
	return func(w http.ResponseWriter, r *http.Request) {
		allScreens, err := screens.ListAll(r.Context())

@@ -145,125 +66,14 @@ func HandleAdminUI(tenants *store.TenantStore, screens *store.ScreenStore, auth
			http.Error(w, "db error", http.StatusInternalServerError)
			return
		}

		// Default tenant slug for user management.
		tenantSlug := "morz"
		if u := reqcontext.UserFromContext(r.Context()); u != nil && u.TenantSlug != "" {
			tenantSlug = u.TenantSlug
		}
		screenUsers, err := auth.ListScreenUsers(r.Context(), tenantSlug)
		if err != nil {
			http.Error(w, "db error", http.StatusInternalServerError)
			return
		}

		// Build per-screen user lists for the modal.
		screenUserMap := map[string][]*store.ScreenUserEntry{}
		for _, sc := range allScreens {
			users, err := screens.GetScreenUsers(r.Context(), sc.ID)
			if err != nil {
				continue
			}
			screenUserMap[sc.ID] = users
		}

		activeTab := r.URL.Query().Get("tab")
		if activeTab == "" {
			activeTab = "screens"
		}

		renderTemplate(w, t, map[string]any{
			"Screens":       allScreens,
			"Tenants":       allTenants,
			"ScreenUsers":   screenUsers,
			"ScreenUserMap": screenUserMap,
			"ActiveTab":     activeTab,
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		t.Execute(w, map[string]any{ //nolint:errcheck
			"Screens": allScreens,
			"Tenants": allTenants,
		})
	}
}

// HandleCreateScreenUser creates a new screen_user for the default tenant.
func HandleCreateScreenUser(auth *store.AuthStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if err := r.ParseForm(); err != nil {
			http.Error(w, "bad form", http.StatusBadRequest)
			return
		}
		username := strings.TrimSpace(r.FormValue("username"))
		password := r.FormValue("password")
		if username == "" || password == "" {
			http.Redirect(w, r, "/admin?tab=users&msg=error_empty", http.StatusSeeOther)
			return
		}

		tenantSlug := "morz"
		if u := reqcontext.UserFromContext(r.Context()); u != nil && u.TenantSlug != "" {
			tenantSlug = u.TenantSlug
		}

		_, err := auth.CreateScreenUser(r.Context(), tenantSlug, username, password)
		if err != nil {
			slog.Error("create screen user failed", "event", "create_screen_user_failed",
				"tenant_slug", tenantSlug, "username", username, "err", err)
			http.Redirect(w, r, "/admin?tab=users&msg=error_exists", http.StatusSeeOther)
			return
		}
		http.Redirect(w, r, "/admin?tab=users&msg=user_added", http.StatusSeeOther)
	}
}

// HandleDeleteScreenUser deletes a screen_user by ID.
func HandleDeleteScreenUser(auth *store.AuthStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		userID := r.PathValue("userID")
		if err := auth.DeleteUser(r.Context(), userID); err != nil {
			slog.Error("delete screen user failed", "event", "delete_screen_user_failed",
				"user_id", userID, "err", err)
			http.Error(w, "Fehler beim Löschen", http.StatusInternalServerError)
			return
		}
		http.Redirect(w, r, "/admin?tab=users&msg=user_deleted", http.StatusSeeOther)
	}
}

// HandleAddUserToScreen grants a user access to a specific screen.
func HandleAddUserToScreen(screens *store.ScreenStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		screenID := r.PathValue("screenID")
		if err := r.ParseForm(); err != nil {
			http.Error(w, "bad form", http.StatusBadRequest)
			return
		}
		userID := strings.TrimSpace(r.FormValue("user_id"))
		if userID == "" {
			http.Redirect(w, r, "/admin?msg=error_empty", http.StatusSeeOther)
			return
		}
		if err := screens.AddUserToScreen(r.Context(), userID, screenID); err != nil {
			slog.Error("add user to screen failed", "event", "add_user_to_screen_failed",
				"screen_id", screenID, "user_id", userID, "err", err)
			http.Redirect(w, r, "/admin?msg=error_db", http.StatusSeeOther)
			return
		}
		http.Redirect(w, r, "/admin?screen="+screenID+"&msg=user_added_to_screen", http.StatusSeeOther)
	}
}

// HandleRemoveUserFromScreen removes a user's access to a specific screen.
func HandleRemoveUserFromScreen(screens *store.ScreenStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		screenID := r.PathValue("screenID")
		userID := r.PathValue("userID")
		if err := screens.RemoveUserFromScreen(r.Context(), userID, screenID); err != nil {
			slog.Error("remove user from screen failed", "event", "remove_user_from_screen_failed",
				"screen_id", screenID, "user_id", userID, "err", err)
			http.Error(w, "db error", http.StatusInternalServerError)
			return
		}
		http.Redirect(w, r, "/admin?screen="+screenID+"&msg=user_removed_from_screen", http.StatusSeeOther)
	}
}

// HandleManageUI renders the playlist management UI for a specific screen.
func HandleManageUI(
	tenants *store.TenantStore,

@@ -281,11 +91,6 @@ func HandleManageUI(
		return
	}

	// K2: tenant isolation: own tenant or admin only.
	if !requireScreenAccess(w, r, screen) {
		return
	}

	var tenant *store.Tenant
	if u := reqcontext.UserFromContext(r.Context()); u != nil && u.TenantSlug != "" {
		tenant, _ = tenants.Get(r.Context(), u.TenantSlug)

@@ -334,7 +139,8 @@ func HandleManageUI(
		}
	}

	renderTemplate(w, t, map[string]any{
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	t.Execute(w, map[string]any{ //nolint:errcheck
		"Screen":   screen,
		"Tenant":   tenant,
		"Playlist": playlist,

@@ -377,7 +183,7 @@ func HandleCreateScreenUI(tenants *store.TenantStore, screens *store.ScreenStore

	_, err = screens.Create(r.Context(), tenant.ID, slug, name, orientation)
	if err != nil {
		http.Error(w, "Interner Fehler", http.StatusInternalServerError)
		http.Error(w, "Fehler: "+err.Error(), http.StatusInternalServerError)
		return
	}
	http.Redirect(w, r, "/admin?msg=added", http.StatusSeeOther)

@@ -424,11 +230,12 @@ func HandleProvisionUI(tenants *store.TenantStore, screens *store.ScreenStore) h

	screen, err := screens.Upsert(r.Context(), tenant.ID, slug, name, orientation)
	if err != nil {
		http.Error(w, "Interner Fehler", http.StatusInternalServerError)
		http.Error(w, "DB-Fehler: "+err.Error(), http.StatusInternalServerError)
		return
	}

	renderTemplate(w, t, map[string]any{
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	t.Execute(w, map[string]any{ //nolint:errcheck
		"Screen":  screen,
		"IP":      ip,
		"SSHUser": sshUser,

@@ -460,14 +267,6 @@ func HandleUploadMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
		return
	}

	// K2: tenant isolation.
	if !requireScreenAccess(w, r, screen) {
		return
	}

	// W3: MaxBytesReader caps uploads at maxUploadSize.
	r.Body = http.MaxBytesReader(w, r.Body, maxUploadSize)

	if err := r.ParseMultipartForm(maxUploadSize); err != nil {
		http.Error(w, "Upload zu groß oder ungültig", http.StatusBadRequest)
		return

@@ -476,15 +275,6 @@ func HandleUploadMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
	assetType := strings.TrimSpace(r.FormValue("type"))
	title := strings.TrimSpace(r.FormValue("title"))

	// Determine tenantSlug for N6 (tenant-specific upload directory).
	tenantSlug := ""
	if u := reqcontext.UserFromContext(r.Context()); u != nil && u.TenantSlug != "" {
		tenantSlug = u.TenantSlug
	}
	if tenantSlug == "" {
		tenantSlug = "default"
	}

	switch assetType {
	case "web":
		url := strings.TrimSpace(r.FormValue("url"))

@@ -507,12 +297,17 @@ func HandleUploadMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
			title = strings.TrimSuffix(header.Filename, filepath.Ext(header.Filename))
		}
		mimeType := header.Header.Get("Content-Type")
		// V1+N6: shared upload helper, tenant-specific directory.
		storagePath, size, ferr := fileutil.SaveUploadedFile(file, header.Filename, title, uploadDir, tenantSlug)
		ext := filepath.Ext(header.Filename)
		filename := fmt.Sprintf("%d_%s%s", time.Now().UnixNano(), sanitize(title), ext)
		destPath := filepath.Join(uploadDir, filename)
		dest, ferr := os.Create(destPath)
		if ferr != nil {
			http.Error(w, "Speicherfehler", http.StatusInternalServerError)
			return
		}
		defer dest.Close()
		size, _ := io.Copy(dest, file)
		storagePath := "/uploads/" + filename
		_, err = media.Create(r.Context(), screen.TenantID, title, assetType, storagePath, "", mimeType, size)
	default:
		http.Error(w, "Unbekannter Typ", http.StatusBadRequest)

@@ -520,7 +315,7 @@ func HandleUploadMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
	}

	if err != nil {
		http.Error(w, "DB-Fehler", http.StatusInternalServerError)
		http.Error(w, "DB-Fehler: "+err.Error(), http.StatusInternalServerError)
		return
	}
	http.Redirect(w, r, "/manage/"+screenSlug+"?msg=uploaded", http.StatusSeeOther)

@@ -542,11 +337,6 @@ func HandleAddItemUI(playlists *store.PlaylistStore, media *store.MediaStore, sc
		return
	}

	// K2: tenant isolation.
	if !requireScreenAccess(w, r, screen) {
		return
	}

	playlist, err := playlists.GetOrCreateForScreen(r.Context(), screen.TenantID, screen.ID, screen.Name)
	if err != nil {
		http.Error(w, "db error", http.StatusInternalServerError)

@@ -598,21 +388,10 @@ func HandleAddItemUI(playlists *store.PlaylistStore, media *store.MediaStore, sc
	}

// HandleDeleteItemUI removes a playlist item and redirects back.
func HandleDeleteItemUI(playlists *store.PlaylistStore, screens *store.ScreenStore, notifier *mqttnotifier.Notifier) http.HandlerFunc {
func HandleDeleteItemUI(playlists *store.PlaylistStore, notifier *mqttnotifier.Notifier) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		screenSlug := r.PathValue("screenSlug")
		itemID := r.PathValue("itemId")

		// K2: tenant isolation.
		screen, err := screens.GetBySlug(r.Context(), screenSlug)
		if err != nil {
			http.Error(w, "screen nicht gefunden", http.StatusNotFound)
			return
		}
		if !requireScreenAccess(w, r, screen) {
			return
		}

		if err := playlists.DeleteItem(r.Context(), itemID); err != nil {
			http.Error(w, "db error", http.StatusInternalServerError)
			return

@@ -631,10 +410,6 @@ func HandleReorderUI(playlists *store.PlaylistStore, screens *store.ScreenStore,
			http.Error(w, "screen nicht gefunden", http.StatusNotFound)
			return
		}
		// K2: tenant isolation.
		if !requireScreenAccess(w, r, screen) {
			return
		}
		playlist, err := playlists.GetByScreen(r.Context(), screen.ID)
		if err != nil {
			http.Error(w, "playlist nicht gefunden", http.StatusNotFound)

@@ -655,21 +430,10 @@ func HandleReorderUI(playlists *store.PlaylistStore, screens *store.ScreenStore,
	}

// HandleUpdateItemUI handles form PATCH/POST to update a single item.
func HandleUpdateItemUI(playlists *store.PlaylistStore, screens *store.ScreenStore, notifier *mqttnotifier.Notifier) http.HandlerFunc {
func HandleUpdateItemUI(playlists *store.PlaylistStore, notifier *mqttnotifier.Notifier) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		screenSlug := r.PathValue("screenSlug")
		itemID := r.PathValue("itemId")

		// K2: tenant isolation.
		screen, err := screens.GetBySlug(r.Context(), screenSlug)
		if err != nil {
			http.Error(w, "screen nicht gefunden", http.StatusNotFound)
			return
		}
		if !requireScreenAccess(w, r, screen) {
			return
		}

		if err := r.ParseForm(); err != nil {
			http.Error(w, "bad form", http.StatusBadRequest)
			return

@@ -698,16 +462,6 @@ func HandleDeleteMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
		screenSlug := r.PathValue("screenSlug")
		mediaID := r.PathValue("mediaId")

		// K2: tenant isolation.
		screen, err := screens.GetBySlug(r.Context(), screenSlug)
		if err != nil {
			http.Error(w, "screen nicht gefunden", http.StatusNotFound)
			return
		}
		if !requireScreenAccess(w, r, screen) {
			return
		}

		asset, err := media.Get(r.Context(), mediaID)
		if err == nil && asset.StoragePath != "" {
			os.Remove(filepath.Join(uploadDir, filepath.Base(asset.StoragePath))) //nolint:errcheck

@@ -23,7 +23,7 @@ func UserFromContext(ctx context.Context) *store.User {
func RequireAuth(authStore *store.AuthStore) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			cookie, err := r.Cookie(reqcontext.SessionCookieName)
			cookie, err := r.Cookie("morz_session")
			if err != nil {
				redirectToLogin(w, r)
				return

@@ -72,10 +72,7 @@ func RequireTenantAccess(next http.Handler) http.Handler {
			return
		}
		tenantSlug := r.PathValue("tenantSlug")
		// An empty tenantSlug means the route was registered without a
		// {tenantSlug} parameter — that is a configuration error. Deny
		// access rather than silently granting it to every logged-in user.
		if tenantSlug == "" || user.TenantSlug != tenantSlug {
		if tenantSlug != "" && user.TenantSlug != tenantSlug {
			http.Error(w, "Forbidden", http.StatusForbidden)
			return
		}

@@ -83,48 +80,6 @@ func RequireTenantAccess(next http.Handler) http.Handler {
	})
}

// RequireScreenAccess returns middleware that enforces per-screen access control.
// Admins bypass the check. Screen-Users must have an explicit entry in
// user_screen_permissions for the screen identified by the {screenSlug} path
// value. The screenStore is used to look up the screen and check permissions.
// Must be chained after RequireAuth.
func RequireScreenAccess(screenStore *store.ScreenStore) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			user := UserFromContext(r.Context())
			if user == nil {
				http.Error(w, "Forbidden", http.StatusForbidden)
				return
			}
			// Admins always have access.
			if user.Role == "admin" {
				next.ServeHTTP(w, r)
				return
			}

			screenSlug := r.PathValue("screenSlug")
			if screenSlug == "" {
				http.Error(w, "Forbidden", http.StatusForbidden)
				return
			}

			screen, err := screenStore.GetBySlug(r.Context(), screenSlug)
			if err != nil {
				http.Error(w, "Screen nicht gefunden", http.StatusNotFound)
				return
			}

			ok, err := screenStore.HasUserScreenAccess(r.Context(), user.ID, screen.ID)
			if err != nil || !ok {
				http.Error(w, "Forbidden", http.StatusForbidden)
				return
			}

			next.ServeHTTP(w, r)
		})
	}
}

// chain applies a list of middleware to a handler, wrapping outermost first.
// chain(m1, m2, m3)(h) == m1(m2(m3(h)))
func chain(h http.Handler, middlewares ...func(http.Handler) http.Handler) http.Handler {
|
||||
|
|
|
|||
|
|
@ -1,91 +0,0 @@
|
|||
package httpapi
|
||||
|
||||
// ratelimit.go — Einfaches In-Memory-Rate-Limiting für POST /login (N1).
|
||||
//
|
||||
// Implementierung: Sliding-Window-Counter pro IP-Adresse.
|
||||
// Erlaubt maximal loginMaxAttempts Versuche pro loginWindow.
|
||||
// Ältere Einträge werden periodisch aus der Map bereinigt.
|
||||
|
||||
import (
|
||||
"net"
|
||||
"net/http"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
loginMaxAttempts = 5
|
||||
loginWindow = 1 * time.Minute
|
||||
cleanupInterval = 5 * time.Minute
|
||||
)
|
||||
|
||||
type loginAttempt struct {
|
||||
count int
|
||||
windowEnd time.Time
|
||||
}
|
||||
|
||||
type loginRateLimiter struct {
|
||||
mu sync.Mutex
|
||||
entries map[string]*loginAttempt
|
||||
}
|
||||
|
||||
func newLoginRateLimiter() *loginRateLimiter {
|
||||
rl := &loginRateLimiter{
|
||||
entries: make(map[string]*loginAttempt),
|
||||
}
|
||||
go rl.cleanup()
|
||||
return rl
|
||||
}
|
||||
|
||||
// Allow returns true if the IP is within the rate limit, false if it should be blocked.
|
||||
func (rl *loginRateLimiter) Allow(ip string) bool {
|
||||
rl.mu.Lock()
|
||||
defer rl.mu.Unlock()
|
||||
|
||||
now := time.Now()
|
||||
e, ok := rl.entries[ip]
|
||||
if !ok || now.After(e.windowEnd) {
|
||||
// Neues Fenster.
|
||||
rl.entries[ip] = &loginAttempt{count: 1, windowEnd: now.Add(loginWindow)}
|
||||
return true
|
||||
}
|
||||
e.count++
|
||||
return e.count <= loginMaxAttempts
|
||||
}
|
||||
|
||||
// cleanup bereinigt abgelaufene Einträge periodisch.
|
||||
func (rl *loginRateLimiter) cleanup() {
|
||||
ticker := time.NewTicker(cleanupInterval)
|
||||
defer ticker.Stop()
|
||||
for range ticker.C {
|
||||
rl.mu.Lock()
|
||||
now := time.Now()
|
||||
for ip, e := range rl.entries {
|
||||
if now.After(e.windowEnd) {
|
||||
delete(rl.entries, ip)
|
||||
}
|
||||
}
|
||||
rl.mu.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
// LoginRateLimit ist eine globale Instanz des Rate-Limiters (package-level Singleton).
|
||||
var LoginRateLimit = newLoginRateLimiter()
|
||||
|
||||
// RateLimitLogin ist Middleware, die Brute-Force-Angriffe auf den Login-Endpoint verhindert.
|
||||
// Bei Überschreitung wird 429 Too Many Requests zurückgegeben.
|
||||
func RateLimitLogin(next http.Handler) http.Handler {
|
||||
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
// IP-Adresse extrahieren (berücksichtigt X-Forwarded-For nicht, um Spoofing zu vermeiden).
|
||||
ip, _, err := net.SplitHostPort(r.RemoteAddr)
|
||||
if err != nil {
|
||||
ip = r.RemoteAddr
|
||||
}
|
||||
|
||||
if !LoginRateLimit.Allow(ip) {
|
||||
http.Error(w, "Zu viele Anmeldeversuche. Bitte warte eine Minute.", http.StatusTooManyRequests)
|
||||
return
|
||||
}
|
||||
next.ServeHTTP(w, r)
|
||||
})
|
||||
}

@@ -85,53 +85,32 @@ func registerManageRoutes(mux *http.ServeMux, d RouterDeps) {
		notifier = mqttnotifier.New("", "", "")
	}

	// Serve uploaded files. N5: directory listing disabled via neuteredFileSystem.
	mux.Handle("GET /uploads/", http.StripPrefix("/uploads/", http.FileServer(neuteredFileSystem{http.Dir(uploadDir)})))
	// Serve uploaded files.
	mux.Handle("GET /uploads/", http.StripPrefix("/uploads/", http.FileServer(http.Dir(uploadDir))))

	// Serve embedded static assets (Bulma CSS, SortableJS) — no external CDN needed.
	mux.HandleFunc("GET /static/bulma.min.css", manage.HandleStaticBulmaCSS())
	mux.HandleFunc("GET /static/Sortable.min.js", manage.HandleStaticSortableJS())

	// K1: CSRF protection for all state-changing routes.
	csrf := CSRFProtect(d.Config.DevMode)

	// K1: sets the CSRF cookie on GET requests so the JS inject script can read it.
	setCSRF := func(h http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if r.Method == http.MethodGet {
				SetCSRFCookie(w, r, d.Config.DevMode)
			}
			h.ServeHTTP(w, r)
		})
	}

	// ── Auth (no auth middleware required) ────────────────────────────────
	// K1: GET /login sets the CSRF cookie; POST /login and POST /logout are CSRF-checked.
	mux.Handle("GET /login", http.HandlerFunc(manage.HandleLoginUI(d.AuthStore, d.ScreenStore, d.Config)))
	// N1: rate limiting on /login (max. 5 attempts/minute per IP).
	mux.Handle("POST /login", RateLimitLogin(csrf(http.HandlerFunc(manage.HandleLoginPost(d.AuthStore, d.ScreenStore, d.Config)))))
	mux.Handle("POST /logout", csrf(http.HandlerFunc(manage.HandleLogoutPost(d.AuthStore, d.Config))))
	mux.HandleFunc("GET /login", manage.HandleLoginUI(d.AuthStore))
	mux.HandleFunc("POST /login", manage.HandleLoginPost(d.AuthStore, d.Config))
	mux.HandleFunc("POST /logout", manage.HandleLogoutPost(d.AuthStore))

	// Shorthand middleware combinators for this router.
	// For GET routes setCSRF sets the cookie; for POST routes csrf validates it.
	authOnly := func(h http.Handler) http.Handler {
		return chain(h, RequireAuth(d.AuthStore), setCSRF, csrf)
		return chain(h, RequireAuth(d.AuthStore))
	}
	authAdmin := func(h http.Handler) http.Handler {
		return chain(h, RequireAuth(d.AuthStore), RequireAdmin, setCSRF, csrf)
		return chain(h, RequireAuth(d.AuthStore), RequireAdmin)
	}
	authTenant := func(h http.Handler) http.Handler {
		return chain(h, RequireAuth(d.AuthStore), RequireTenantAccess, setCSRF, csrf)
	}
	// authScreen: like authOnly, but additionally checks screen access for screen_user.
	// Admins and tenant users are passed through by RequireScreenAccess.
	authScreen := func(h http.Handler) http.Handler {
		return chain(h, RequireAuth(d.AuthStore), RequireScreenAccess(d.ScreenStore), setCSRF, csrf)
		return chain(h, RequireAuth(d.AuthStore), RequireTenantAccess)
	}

	// ── Admin UI ──────────────────────────────────────────────────────────
	mux.Handle("GET /admin",
		authAdmin(http.HandlerFunc(manage.HandleAdminUI(d.TenantStore, d.ScreenStore, d.AuthStore))))
		authAdmin(http.HandlerFunc(manage.HandleAdminUI(d.TenantStore, d.ScreenStore))))
	mux.Handle("POST /admin/screens/provision",
		authAdmin(http.HandlerFunc(manage.HandleProvisionUI(d.TenantStore, d.ScreenStore))))
	mux.Handle("POST /admin/screens",

@@ -139,32 +118,21 @@ func registerManageRoutes(mux *http.ServeMux, d RouterDeps) {
	mux.Handle("POST /admin/screens/{screenId}/delete",
		authAdmin(http.HandlerFunc(manage.HandleDeleteScreenUI(d.ScreenStore))))

	// ── Screen user management (admin only) ──────────────────────────────
	mux.Handle("POST /admin/users",
		authAdmin(http.HandlerFunc(manage.HandleCreateScreenUser(d.AuthStore))))
	mux.Handle("POST /admin/users/{userID}/delete",
		authAdmin(http.HandlerFunc(manage.HandleDeleteScreenUser(d.AuthStore))))
	mux.Handle("POST /admin/screens/{screenID}/users",
		authAdmin(http.HandlerFunc(manage.HandleAddUserToScreen(d.ScreenStore))))
	mux.Handle("POST /admin/screens/{screenID}/users/{userID}/remove",
		authAdmin(http.HandlerFunc(manage.HandleRemoveUserFromScreen(d.ScreenStore))))

	// ── Playlist management UI ────────────────────────────────────────────
	// authScreen enforces that screen_user only accesses their permitted screens.
	mux.Handle("GET /manage/{screenSlug}",
		authScreen(http.HandlerFunc(manage.HandleManageUI(d.TenantStore, d.ScreenStore, d.MediaStore, d.PlaylistStore))))
		authOnly(http.HandlerFunc(manage.HandleManageUI(d.TenantStore, d.ScreenStore, d.MediaStore, d.PlaylistStore))))
	mux.Handle("POST /manage/{screenSlug}/upload",
		authScreen(http.HandlerFunc(manage.HandleUploadMediaUI(d.MediaStore, d.ScreenStore, uploadDir))))
		authOnly(http.HandlerFunc(manage.HandleUploadMediaUI(d.MediaStore, d.ScreenStore, uploadDir))))
	mux.Handle("POST /manage/{screenSlug}/items",
		authScreen(http.HandlerFunc(manage.HandleAddItemUI(d.PlaylistStore, d.MediaStore, d.ScreenStore, notifier))))
		authOnly(http.HandlerFunc(manage.HandleAddItemUI(d.PlaylistStore, d.MediaStore, d.ScreenStore, notifier))))
	mux.Handle("POST /manage/{screenSlug}/items/{itemId}",
		authScreen(http.HandlerFunc(manage.HandleUpdateItemUI(d.PlaylistStore, d.ScreenStore, notifier))))
		authOnly(http.HandlerFunc(manage.HandleUpdateItemUI(d.PlaylistStore, notifier))))
	mux.Handle("POST /manage/{screenSlug}/items/{itemId}/delete",
		authScreen(http.HandlerFunc(manage.HandleDeleteItemUI(d.PlaylistStore, d.ScreenStore, notifier))))
		authOnly(http.HandlerFunc(manage.HandleDeleteItemUI(d.PlaylistStore, notifier))))
	mux.Handle("POST /manage/{screenSlug}/reorder",
		authScreen(http.HandlerFunc(manage.HandleReorderUI(d.PlaylistStore, d.ScreenStore, notifier))))
		authOnly(http.HandlerFunc(manage.HandleReorderUI(d.PlaylistStore, d.ScreenStore, notifier))))
	mux.Handle("POST /manage/{screenSlug}/media/{mediaId}/delete",
		authScreen(http.HandlerFunc(manage.HandleDeleteMediaUI(d.MediaStore, d.ScreenStore, uploadDir, notifier))))
		authOnly(http.HandlerFunc(manage.HandleDeleteMediaUI(d.MediaStore, d.ScreenStore, uploadDir, notifier))))

	// ── JSON API — screens ────────────────────────────────────────────────
	// Self-registration: no auth (player calls this on startup).

|
@@ -294,31 +294,6 @@ function toggleUploadFields() {
 setInterval(pollStatus, 30000);
 })();
 </script>
-<script>
-// K1: CSRF double-submit: insert the token from the cookie into all POST forms.
-(function() {
-  function getCookie(name) {
-    var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
-    return m ? decodeURIComponent(m[1]) : '';
-  }
-  function injectCSRF() {
-    var token = getCookie('morz_csrf');
-    if (!token) return;
-    document.querySelectorAll('form[method="POST"],form[method="post"]').forEach(function(f) {
-      if (!f.querySelector('input[name="csrf_token"]')) {
-        var inp = document.createElement('input');
-        inp.type = 'hidden'; inp.name = 'csrf_token'; inp.value = token;
-        f.appendChild(inp);
-      }
-    });
-  }
-  if (document.readyState === 'loading') {
-    document.addEventListener('DOMContentLoaded', injectCSRF);
-  } else {
-    injectCSRF();
-  }
-})();
-</script>
 
 </body>
 </html>`
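The removed K1 script is the client half of the double-submit pattern: the same token travels in the `morz_csrf` cookie and in the `csrf_token` form field. A minimal sketch of the matching server-side check (the function and helper names here are hypothetical; only the cookie and field names are taken from the diff):

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// validCSRF accepts a POST only when the form token matches the cookie token.
// ConstantTimeCompare avoids leaking token bytes through timing.
func validCSRF(r *http.Request) bool {
	c, err := r.Cookie("morz_csrf")
	if err != nil || c.Value == "" {
		return false
	}
	return subtle.ConstantTimeCompare([]byte(c.Value), []byte(r.FormValue("csrf_token"))) == 1
}

// makeReq builds a form POST carrying a cookie token and a form token (demo helper).
func makeReq(cookieTok, formTok string) *http.Request {
	r := httptest.NewRequest("POST", "/manage/demo", strings.NewReader("csrf_token="+formTok))
	r.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	r.AddCookie(&http.Cookie{Name: "morz_csrf", Value: cookieTok})
	return r
}

func main() {
	fmt.Println(validCSRF(makeReq("tok123", "tok123"))) // true
	fmt.Println(validCSRF(makeReq("tok123", "forged"))) // false
}
```

The check works because a cross-site attacker can make the browser send the cookie but cannot read it to copy its value into the form field.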
@@ -2,15 +2,15 @@
 package tenant
 
 import (
-	"bytes"
+	"fmt"
 	"html/template"
+	"io"
 	"net/http"
+	"os"
+	"path/filepath"
 	"strings"
+	"time"
 
-	"git.az-it.net/az/morz-infoboard/server/backend/internal/fileutil"
 	"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
 )
 
@@ -94,19 +94,13 @@ func HandleTenantDashboard(
 			}
 		}
 
-		// W7: render the template into a buffer; send it to the client only on success.
-		var buf bytes.Buffer
-		if err := t.Execute(&buf, map[string]any{
+		w.Header().Set("Content-Type", "text/html; charset=utf-8")
+		t.Execute(w, map[string]any{ //nolint:errcheck
 			"Tenant":  tenant,
 			"Screens": screens,
 			"Assets":  assets,
 			"Flash":   flash,
-		}); err != nil {
-			http.Error(w, "Interner Fehler (Template)", http.StatusInternalServerError)
-			return
-		}
-		w.Header().Set("Content-Type", "text/html; charset=utf-8")
-		buf.WriteTo(w) //nolint:errcheck
+		})
 	})
 }
@@ -126,9 +120,6 @@ func HandleTenantUpload(
 			return
 		}
 
-		// W3: MaxBytesReader caps uploads at maxUploadSize before ParseMultipartForm runs.
-		r.Body = http.MaxBytesReader(w, r.Body, maxUploadSize)
-
 		if err := r.ParseMultipartForm(maxUploadSize); err != nil {
 			http.Error(w, "Upload zu groß oder ungültig", http.StatusBadRequest)
 			return
@@ -165,12 +156,24 @@ func HandleTenantUpload(
 			if detected := mimeToAssetType(mimeType); detected != "" {
 				assetType = detected
 			}
-			// V1+N6: tenant-specific upload directory.
-			storagePath, size, cerr := fileutil.SaveUploadedFile(file, header.Filename, title, uploadDir, tenantSlug)
-			if cerr != nil {
+			ext := filepath.Ext(header.Filename)
+			filename := fmt.Sprintf("%d_%s%s", time.Now().UnixNano(), sanitize(title), ext)
+			destPath := filepath.Join(uploadDir, filename)
+
+			dest, ferr := os.Create(destPath)
+			if ferr != nil {
 				http.Error(w, "Speicherfehler", http.StatusInternalServerError)
 				return
 			}
+			defer dest.Close()
+
+			size, cerr := io.Copy(dest, file)
+			if cerr != nil {
+				os.Remove(destPath) //nolint:errcheck
+				http.Error(w, "Schreibfehler", http.StatusInternalServerError)
+				return
+			}
+			storagePath := "/uploads/" + filename
 			_, err = mediaStore.Create(r.Context(), tenant.ID, title, assetType, storagePath, "", mimeType, size)
 
 		default:
@@ -179,7 +182,7 @@ func HandleTenantUpload(
 		}
 
 		if err != nil {
-			http.Error(w, "Interner Fehler", http.StatusInternalServerError)
+			http.Error(w, "DB-Fehler: "+err.Error(), http.StatusInternalServerError)
 			return
 		}
 		http.Redirect(w, r, "/tenant/"+tenantSlug+"/dashboard?tab=media&flash=uploaded", http.StatusSeeOther)
@@ -1,32 +0,0 @@
-package httpapi
-
-// uploads.go: helpers for serving uploads securely (N5, N6).
-
-import (
-	"net/http"
-	"os"
-)
-
-// neuteredFileSystem wraps an http.FileSystem and disables directory listing (N5).
-// When Open() returns a directory, it returns an error as if the file was not found.
-type neuteredFileSystem struct {
-	fs http.FileSystem
-}
-
-func (nfs neuteredFileSystem) Open(path string) (http.File, error) {
-	f, err := nfs.fs.Open(path)
-	if err != nil {
-		return nil, err
-	}
-	s, err := f.Stat()
-	if err != nil {
-		f.Close() //nolint:errcheck
-		return nil, err
-	}
-	if s.IsDir() {
-		// Return os.ErrNotExist so http.FileServer responds with 404.
-		f.Close() //nolint:errcheck
-		return nil, os.ErrNotExist
-	}
-	return f, nil
-}
@@ -10,11 +10,6 @@ import (
 	"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
 )
 
-// SessionCookieName is the HTTP cookie name for sessions.
-// It is used in middleware.go (RequireAuth) and manage/auth.go (login/logout)
-// and is defined centrally here to avoid duplication.
-const SessionCookieName = "morz_session"
-
 type contextKey int
 
 const contextKeyUser contextKey = 0
@@ -112,17 +112,11 @@ func (s *AuthStore) VerifyPassword(ctx context.Context, userID, password string)
 // if no user with username 'admin' already exists. The password is hashed with bcrypt.
 // bcrypt cost factor 12 is used (minimum recommended for production).
 func (s *AuthStore) EnsureAdminUser(ctx context.Context, tenantSlug, password string) error {
-	// Check whether 'admin' user already exists for this specific tenant.
-	// The check must be scoped to the tenant to avoid false positives when
-	// another tenant already has an 'admin' user.
+	// Check whether 'admin' user already exists for this tenant.
 	var exists bool
 	err := s.pool.QueryRow(ctx,
-		`select exists(
-			select 1 from users u
-			join tenants t on t.id = u.tenant_id
-			where u.username = $1 and t.slug = $2
-		)`,
-		"admin", tenantSlug,
+		`select exists(select 1 from users where username = $1)`,
+		"admin",
 	).Scan(&exists)
 	if err != nil {
 		return fmt.Errorf("auth: check admin user: %w", err)
@@ -136,97 +130,39 @@ func (s *AuthStore) EnsureAdminUser(ctx context.Context, tenantSlug, password st
 		return fmt.Errorf("auth: hash password: %w", err)
 	}
 
-	var tenantID string
-	err = s.pool.QueryRow(ctx, `select id from tenants where slug = $1`, tenantSlug).Scan(&tenantID)
-	if err != nil {
-		if err == pgx.ErrNoRows {
-			return fmt.Errorf("auth: tenant not found: %s", tenantSlug)
-		}
-		return fmt.Errorf("auth: resolve tenant: %w", err)
-	}
-
 	_, err = s.pool.Exec(ctx,
 		`insert into users(tenant_id, username, password_hash, role)
-		values($1, 'admin', $2, 'admin')`,
-		tenantID, string(hash))
+		values(
+			(select id from tenants where slug = $1),
+			'admin',
+			$2,
+			'admin'
+		)`,
+		tenantSlug, string(hash))
 	if err != nil {
 		return fmt.Errorf("auth: create admin user: %w", err)
 	}
 	return nil
 }
 
-// CreateScreenUser creates a new user with role 'screen_user' for the tenant
-// identified by tenantSlug. The password is hashed with bcrypt (cost 12).
-// Returns pgx.ErrNoRows if the tenant does not exist, or a wrapped error if
-// the username is already taken (unique constraint violation).
-func (s *AuthStore) CreateScreenUser(ctx context.Context, tenantSlug, username, password string) (*User, error) {
-	var tenantID string
-	err := s.pool.QueryRow(ctx, `select id from tenants where slug = $1`, tenantSlug).Scan(&tenantID)
-	if err != nil {
-		if err == pgx.ErrNoRows {
-			return nil, fmt.Errorf("auth: tenant not found: %s", tenantSlug)
-		}
-		return nil, fmt.Errorf("auth: resolve tenant: %w", err)
-	}
-
-	hash, err := bcrypt.GenerateFromPassword([]byte(password), 12)
-	if err != nil {
-		return nil, fmt.Errorf("auth: hash password: %w", err)
-	}
-
-	row := s.pool.QueryRow(ctx,
-		`insert into users(tenant_id, username, password_hash, role)
-		values($1, $2, $3, 'screen_user')
-		returning id, tenant_id, $4::text, username, password_hash, role, created_at`,
-		tenantID, username, string(hash), tenantSlug)
-	u, err := scanUserWithSlug(row)
-	if err != nil {
-		return nil, fmt.Errorf("auth: create screen user: %w", err)
-	}
-	return u, nil
-}
-
-// ListScreenUsers returns all users with role 'screen_user' for the given tenant.
-func (s *AuthStore) ListScreenUsers(ctx context.Context, tenantSlug string) ([]*User, error) {
-	rows, err := s.pool.Query(ctx,
-		`select u.id, u.tenant_id, coalesce(t.slug, ''), u.username, u.password_hash, u.role, u.created_at
-		from users u
-		left join tenants t on t.id = u.tenant_id
-		where t.slug = $1 and u.role = 'screen_user'
-		order by u.username`, tenantSlug)
-	if err != nil {
-		return nil, fmt.Errorf("auth: list screen users: %w", err)
-	}
-	defer rows.Close()
-	var out []*User
-	for rows.Next() {
-		u, err := scanUserWithSlug(rows)
-		if err != nil {
-			return nil, err
-		}
-		out = append(out, u)
-	}
-	return out, rows.Err()
-}
-
-// DeleteUser removes a user and all their session + screen permission records (CASCADE).
-// It refuses to delete users with role 'admin' to prevent lockout.
-func (s *AuthStore) DeleteUser(ctx context.Context, userID string) error {
-	tag, err := s.pool.Exec(ctx,
-		`delete from users where id = $1 and role != 'admin'`, userID)
-	if err != nil {
-		return fmt.Errorf("auth: delete user: %w", err)
-	}
-	if tag.RowsAffected() == 0 {
-		return fmt.Errorf("auth: delete user: not found or is admin")
-	}
-	return nil
-}
-
 // ------------------------------------------------------------------
 // scan helpers
 // ------------------------------------------------------------------
 
 func scanUser(row interface {
 	Scan(dest ...any) error
 }) (*User, error) {
 	var u User
 	err := row.Scan(&u.ID, &u.TenantID, &u.Username, &u.PasswordHash, &u.Role, &u.CreatedAt)
 	if err != nil {
 		if err == pgx.ErrNoRows {
 			return nil, pgx.ErrNoRows
 		}
 		return nil, fmt.Errorf("scan user: %w", err)
 	}
 	return &u, nil
 }
 
 // scanUserWithSlug scans a row that includes tenant_slug as the third column.
 func scanUserWithSlug(row interface {
 	Scan(dest ...any) error
@@ -53,13 +53,6 @@ type Playlist struct {
 	UpdatedAt time.Time `json:"updated_at"`
 }
 
-// ScreenUserEntry is a lightweight view used when listing users assigned to a screen.
-type ScreenUserEntry struct {
-	ID        string    `json:"id"`
-	Username  string    `json:"username"`
-	CreatedAt time.Time `json:"created_at"`
-}
-
 type PlaylistItem struct {
 	ID         string `json:"id"`
 	PlaylistID string `json:"playlist_id"`
@@ -210,90 +203,6 @@ func (s *ScreenStore) Delete(ctx context.Context, id string) error {
 	return err
 }
 
-// GetAccessibleScreens returns all screens that userID has explicit access to
-// via user_screen_permissions.
-func (s *ScreenStore) GetAccessibleScreens(ctx context.Context, userID string) ([]*Screen, error) {
-	rows, err := s.pool.Query(ctx,
-		`select sc.id, sc.tenant_id, sc.slug, sc.name, sc.orientation, sc.created_at
-		from screens sc
-		join user_screen_permissions usp on usp.screen_id = sc.id
-		where usp.user_id = $1
-		order by sc.name`, userID)
-	if err != nil {
-		return nil, fmt.Errorf("screens: get accessible: %w", err)
-	}
-	defer rows.Close()
-	var out []*Screen
-	for rows.Next() {
-		sc, err := scanScreen(rows)
-		if err != nil {
-			return nil, err
-		}
-		out = append(out, sc)
-	}
-	return out, rows.Err()
-}
-
-// HasUserScreenAccess returns true when userID has an explicit permission entry
-// for screenID in user_screen_permissions.
-func (s *ScreenStore) HasUserScreenAccess(ctx context.Context, userID, screenID string) (bool, error) {
-	var ok bool
-	err := s.pool.QueryRow(ctx,
-		`select exists(
-			select 1 from user_screen_permissions
-			where user_id = $1 and screen_id = $2
-		)`, userID, screenID).Scan(&ok)
-	return ok, err
-}
-
-// AddUserToScreen creates a permission entry granting userID access to screenID.
-// Silently succeeds if the entry already exists (ON CONFLICT DO NOTHING).
-func (s *ScreenStore) AddUserToScreen(ctx context.Context, userID, screenID string) error {
-	_, err := s.pool.Exec(ctx,
-		`insert into user_screen_permissions(user_id, screen_id)
-		values($1, $2)
-		on conflict (user_id, screen_id) do nothing`,
-		userID, screenID)
-	if err != nil {
-		return fmt.Errorf("screens: add user to screen: %w", err)
-	}
-	return nil
-}
-
-// RemoveUserFromScreen deletes the permission entry for userID / screenID.
-func (s *ScreenStore) RemoveUserFromScreen(ctx context.Context, userID, screenID string) error {
-	_, err := s.pool.Exec(ctx,
-		`delete from user_screen_permissions where user_id = $1 and screen_id = $2`,
-		userID, screenID)
-	if err != nil {
-		return fmt.Errorf("screens: remove user from screen: %w", err)
-	}
-	return nil
-}
-
-// GetScreenUsers returns all users that have explicit access to screenID.
-func (s *ScreenStore) GetScreenUsers(ctx context.Context, screenID string) ([]*ScreenUserEntry, error) {
-	rows, err := s.pool.Query(ctx,
-		`select u.id, u.username, u.created_at
-		from users u
-		join user_screen_permissions usp on usp.user_id = u.id
-		where usp.screen_id = $1
-		order by u.username`, screenID)
-	if err != nil {
-		return nil, fmt.Errorf("screens: get screen users: %w", err)
-	}
-	defer rows.Close()
-	var out []*ScreenUserEntry
-	for rows.Next() {
-		var e ScreenUserEntry
-		if err := rows.Scan(&e.ID, &e.Username, &e.CreatedAt); err != nil {
-			return nil, fmt.Errorf("scan screen user entry: %w", err)
-		}
-		out = append(out, &e)
-	}
-	return out, rows.Err()
-}
-
 func scanScreen(row interface {
 	Scan(dest ...any) error
 }) (*Screen, error) {
@@ -396,18 +305,6 @@ func (s *PlaylistStore) GetByScreen(ctx context.Context, screenID string) (*Play
 	return scanPlaylist(row)
 }
 
-// GetByItemID returns the playlist that contains the given playlist item.
-// Used for tenant-isolation checks (K4).
-func (s *PlaylistStore) GetByItemID(ctx context.Context, itemID string) (*Playlist, error) {
-	row := s.pool.QueryRow(ctx,
-		`select pl.id, pl.tenant_id, pl.screen_id, pl.name, pl.is_active,
-			pl.default_duration_seconds, pl.created_at, pl.updated_at
-		from playlists pl
-		join playlist_items pi on pi.playlist_id = pl.id
-		where pi.id = $1`, itemID)
-	return scanPlaylist(row)
-}
-
 func (s *PlaylistStore) UpdateDefaultDuration(ctx context.Context, id string, seconds int) error {
 	_, err := s.pool.Exec(ctx,
 		`update playlists set default_duration_seconds=$2, updated_at=now() where id=$1`, id, seconds)
@@ -476,20 +373,23 @@ func (s *PlaylistStore) ListActiveItems(ctx context.Context, playlistID string)
 }
 
 func (s *PlaylistStore) AddItem(ctx context.Context, playlistID, mediaAssetID, itemType, src, title string, durationSeconds int, validFrom, validUntil *time.Time) (*PlaylistItem, error) {
+	// Place at end of list.
+	var maxIdx int
+	s.pool.QueryRow(ctx,
+		`select coalesce(max(order_index)+1, 0) from playlist_items where playlist_id=$1`, playlistID,
+	).Scan(&maxIdx) //nolint:errcheck
+
 	var mediaID *string
 	if mediaAssetID != "" {
 		mediaID = &mediaAssetID
 	}
 
-	// W1: atomic subquery instead of 2 separate queries; prevents a race condition on order_index.
 	row := s.pool.QueryRow(ctx,
 		`insert into playlist_items(playlist_id, media_asset_id, order_index, type, src, title, duration_seconds, valid_from, valid_until)
-		values($1,$2,
-			(select coalesce(max(order_index)+1, 0) from playlist_items where playlist_id=$1),
-			$3,$4,$5,$6,$7,$8)
+		values($1,$2,$3,$4,$5,$6,$7,$8,$9)
 		returning id, playlist_id, coalesce(media_asset_id,''), order_index, type, src,
 			coalesce(title,''), duration_seconds, valid_from, valid_until, enabled, created_at`,
-		playlistID, mediaID, itemType, src, title, durationSeconds, validFrom, validUntil)
+		playlistID, mediaID, maxIdx, itemType, src, title, durationSeconds, validFrom, validUntil)
 	return scanPlaylistItem(row)
 }