Security review + Phase 6: CSRF, rate limiting, tenant isolation, screenshots, Ansible
### Security fixes (K1–K6, W1–W4, W7, N1, N5–N6, V1, V5–V7)
- K1: CSRF protection via double-submit cookie (httpapi/csrf.go + csrf_helpers.go)
- K2: requireScreenAccess() in all manage handlers (tenant isolation)
- K3: tenant check on DELETE /api/v1/media/{id}
- K4: requirePlaylistAccess() + GetByItemID() for the JSON-API playlist routes
- K5: admin password is now logged only as [set]
- K6: POST /api/v1/screens/register requires a pre-shared secret (MORZ_INFOBOARD_REGISTER_SECRET)
- W1: fixed race condition on order_index (atomic subquery in AddItem)
- W2: graceful shutdown with a 15 s timeout on SIGTERM/SIGINT
- W3: http.MaxBytesReader (512 MB) in all upload handlers
- W4: err.Error() is no longer sent to the client
- W7: template execution via bytes.Buffer (no partial write on error)
- N1: rate limiting on /login (5 attempts/minute per IP, httpapi/ratelimit.go)
- N5: directory listing on /uploads/ disabled (neuteredFileSystem)
- N6: uploads separated per tenant (uploads/{tenantSlug}/)
- V1: upload logic consolidated in internal/fileutil/fileutil.go
- V5: cookie name as the constant reqcontext.SessionCookieName
- V6: structured logging with log/slog + JSON handler
- V7: DB pool is closed during graceful shutdown
### Phase 6: screenshot generation
- created player/agent/internal/screenshot/screenshot.go
- integrated into app.go with the MORZ_INFOBOARD_SCREENSHOT_EVERY config option
### UX: PDF.js integration
- pdf.min.js + pdf.worker.min.js embedded as local assets
- automatic page flipping in the player
### Ansible: new roles
- created signage_base, signage_server, signage_provision
- extended inventory.yml and site.yml
### Concept docs
- GRUPPEN-KONZEPT.md, KAMPAGNEN-AKTIVIERUNG.md, MONITORING-KONZEPT.md
- PROVISION-KONZEPT.md, TEMPLATE-EDITOR.md, WATCHDOG-KONZEPT.md
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
parent 029fa39ffd
commit dd3ec070f7
47 changed files with 4530 additions and 135 deletions
TODO.md (60 changed lines)
@@ -47,7 +47,7 @@
 - [x] Define the directory layout on the player
 - [x] Scope `player-agent` functionally
 - [x] Scope `player-ui` functionally (local kiosk page with splash + sysinfo overlay)
-- [ ] Define a watchdog concept for browser and agent
+- [x] Define a watchdog concept for browser and agent
 - [x] Specify the offline-overlay behavior
 - [x] Work out error handling for web content and timeouts
 - [x] Plan display control for on/off, rotation, and restart
@@ -62,12 +62,12 @@
 - [x] Define the storage concept for uploads, cache files, and screenshots
 - [x] Define the authentication concept
 - [x] Secure tenant separation in the data model and the APIs
-- [ ] Define a logging and monitoring concept
+- [x] Define a logging and monitoring concept
-- [ ] Functionally scope the template editor for global campaigns
+- [x] Functionally scope the template editor for global campaigns
-- [ ] Plan the activation UI for seasonal or temporary campaigns
+- [x] Plan the activation UI for seasonal or temporary campaigns
-- [ ] Plan a grouping or slot model for cross-monitor layouts
+- [x] Plan a grouping or slot model for cross-monitor layouts
 - [x] Functionally and technically scope the provisioning UI for new screens
-- [ ] Plan a job-runner concept for Ansible-based initial installation
+- [x] Plan a job-runner concept for Ansible-based initial installation
 
 ## Phase 5 - Prototyping
 
@@ -89,18 +89,18 @@
 - [x] Create a Docker Compose setup for the server
 - [x] Create systemd units for the player
 - [x] Create the Chromium kiosk start script
-- [ ] Integrate screenshot generation on the player
+- [x] Integrate screenshot generation on the player
 - [x] Integrate heartbeat and status reporting
 - [x] Implemented MQTT playlist-change synchronization with backend debounce (2 s) and agent debounce (3 s)
 - [ ] Verify error and restart behavior
 
 ## Phase 7 - Ansible automation
 
-- [ ] Create the `signage_base` role
+- [x] Create the `signage_base` role
 - [x] Create the `signage_player` role
 - [x] Create the `signage_display` role
-- [ ] Create the `signage_server` role
+- [x] Create the `signage_server` role
-- [ ] Create the `signage_provision` role
+- [x] Create the `signage_provision` role
 - [x] Design the inventory/variable model for multiple monitors
 - [x] Map screen-specific variables such as `screen_id`, rotation, and resolution
 - [x] Automate the initial installation of a new player
@@ -145,7 +145,7 @@
 - [x] Show screen online/offline status in the admin table (populated from the /status endpoint)
 - [x] Wrap the playlist table in an overflow-x wrapper (responsive on small screens)
 - [x] PDF rendering: hide the sidebar and toolbar in the Chromium PDF viewer (URL parameters navpanes=0, toolbar=0)
-- [ ] PDF rendering: integrate PDF.js for automatic page flipping
+- [x] PDF rendering: integrate PDF.js for automatic page flipping
 
 ### Medium priority
 
@@ -171,6 +171,44 @@
 - [x] Fix: /api/startup-token sets the Cache-Control: no-store header (server + client)
 - [x] Fix: resolved the TestAssetsServed nil dereference caused by a dead goroutine
 
+## Security & Code Review (Opus, 2026-03-23)
+
+### Critical — security vulnerabilities
+
+- [x] **K2** Tenant isolation for `/manage/{screenSlug}/*`: `requireScreenAccess()` in all manage handlers
+- [x] **K3** `DELETE /api/v1/media/{id}`: tenant check via reqcontext.UserFromContext
+- [x] **K4** JSON-API playlist routes (`/items`, `/playlists/*/items`, `/order`, `/duration`): `requirePlaylistAccess()` + `GetByItemID()` in the store
+- [x] **K1** CSRF protection: double-submit-cookie pattern (`httpapi/csrf.go`); JS injected into all templates; middleware in the router
+- [x] **K6** `POST /api/v1/screens/register`: pre-shared secret via `MORZ_INFOBOARD_REGISTER_SECRET` (header `X-Register-Secret`); the player agent sends the secret along
+- [x] **K5** Admin password removed from the log — only `[set]` is logged
+
+### Important — robustness
+
+- [x] **N5** Directory listing on `/uploads/` disabled via `neuteredFileSystem` (`httpapi/uploads.go`)
+- [x] **N6** Uploads separated per tenant: `fileutil.SaveUploadedFile()` stores files under `uploads/{tenantSlug}/`
+- [x] **W1** Race condition on `order_index` fixed: atomic subquery in `AddItem()`
+- [x] **W2** Graceful shutdown implemented: `http.Server.Shutdown()` with a 15 s timeout on SIGTERM/SIGINT
+- [x] **W3** Uploads capped with `http.MaxBytesReader` (512 MB) in all three upload handlers
+- [x] **W4** `err.Error()` no longer sent to the client — generic error messages, details stay server-side
+- [x] **W7** Template execution errors: render into a `bytes.Buffer`, send to the client only on success (`renderTemplate()`)
+
+### Improvement — maintainability
+
+- [ ] **V3** No tests for auth, middleware, tenant handlers (all Phase 1-5 code without coverage)
+- [x] **V1** Upload logic consolidated in `internal/fileutil/fileutil.go` (`SaveUploadedFile`)
+- [x] **V5** Cookie name as the constant `reqcontext.SessionCookieName` — manage/auth.go and middleware.go use it
+- [x] **V6** Structured logging: `log/slog` with JSON handler in `main.go`; `app.go` uses `slog.Info`/`slog.Error`
+- [x] **V7** DB pool is closed in the graceful-shutdown handler (`a.dbPool.Close()`)
+
+### Nice to have — features
+
+- [x] **N1** Rate limiting on `/login`: in-memory sliding window (5 attempts/minute per IP) via `httpapi/ratelimit.go`
+- [ ] **N2** Password change / self-service reset
+- [ ] **N3** Tenant user management in the admin UI
+- [ ] **N4** Session TTL configurable via a config variable (currently hardcoded to 8 h)
+
+**Note on K6:** `MORZ_INFOBOARD_REGISTER_SECRET` must be set to the same value in `server/.env` / `docker-compose.yml` and in the player config (`MORZ_INFOBOARD_REGISTER_SECRET` or `register_secret` in `config.json`). If the variable is empty, the endpoint remains open (backward compatibility).
 
 ## Cross-cutting concerns
 
 - [ ] Plan backups for database and media
ansible/inventory.yml

@@ -5,3 +5,8 @@ all:
   hosts:
     info10:
     info01-dev:
+signage_servers:
+  hosts:
+    dockerbox:
+      # ansible_host: 10.0.0.70
+      # ansible_user: admin
ansible/roles/signage_base/defaults/main.yml (new file, 12 lines)

---
signage_user: morz
signage_timezone: "Europe/Berlin"

signage_base_packages:
  - curl
  - ca-certificates
  - rsync
  - htop
  - vim-tiny
  - bash-completion
  - ntp
ansible/roles/signage_base/handlers/main.yml (new file, 12 lines)

---
- name: Restart cron
  ansible.builtin.systemd:
    name: cron
    state: restarted
  become: true

- name: Restart journald
  ansible.builtin.systemd:
    name: systemd-journald
    state: restarted
  become: true
ansible/roles/signage_base/tasks/main.yml (new file, 55 lines)

---
- name: Update apt cache and upgrade installed packages
  ansible.builtin.apt:
    update_cache: true
    upgrade: dist
    cache_valid_time: 3600
  become: true

- name: Install base packages
  ansible.builtin.apt:
    name: "{{ signage_base_packages }}"
    state: present
  become: true

- name: Set system timezone
  community.general.timezone:
    name: "{{ signage_timezone }}"
  become: true
  notify: Restart cron

- name: Ensure NTP service is enabled and running
  ansible.builtin.systemd:
    name: ntp
    enabled: true
    state: started
  become: true

- name: Ensure journald drop-in directory exists
  ansible.builtin.file:
    path: /etc/systemd/journald.conf.d
    state: directory
    owner: root
    group: root
    mode: "0755"
  become: true

- name: Configure journald volatile storage (RAM only, spares the SD card)
  ansible.builtin.copy:
    dest: /etc/systemd/journald.conf.d/morz-volatile.conf
    content: |
      [Journal]
      Storage=volatile
      RuntimeMaxUse=20M
    owner: root
    group: root
    mode: "0644"
  become: true
  notify: Restart journald

- name: Ensure signage user exists
  ansible.builtin.user:
    name: "{{ signage_user }}"
    create_home: true
    state: present
  become: true
ansible/roles/signage_provision/defaults/main.yml (new file, 16 lines)

---
# Admin token used to authenticate against the server API
# Must be overridden in group_vars, host_vars or vault.
signage_admin_token: ""

# Server base URL reachable from the Ansible controller
signage_server_base_url: "http://10.0.0.70:8080"

# SSH public key to deploy to the signage user
signage_ssh_public_key: ""

# User that Ansible should permanently manage (after bootstrapping)
signage_user: morz

# Config dir on the target (shared with signage_player role)
signage_config_dir: /etc/signage
ansible/roles/signage_provision/handlers/main.yml (new file, 3 lines)

---
# No handlers required for provisioning role.
# Handlers are intentionally empty – provisioning tasks are one-shot.
ansible/roles/signage_provision/tasks/main.yml (new file, 57 lines)

---
- name: Ensure signage user exists
  ansible.builtin.user:
    name: "{{ signage_user }}"
    create_home: true
    state: present
  become: true

- name: Ensure .ssh directory exists for signage user
  ansible.builtin.file:
    path: "/home/{{ signage_user }}/.ssh"
    state: directory
    owner: "{{ signage_user }}"
    group: "{{ signage_user }}"
    mode: "0700"
  become: true

- name: Deploy SSH public key for signage user
  ansible.builtin.authorized_key:
    user: "{{ signage_user }}"
    key: "{{ signage_ssh_public_key }}"
    state: present
  become: true
  when: signage_ssh_public_key | length > 0

- name: Ensure config directory exists
  ansible.builtin.file:
    path: "{{ signage_config_dir }}"
    state: directory
    owner: root
    group: root
    mode: "0755"
  become: true

- name: Deploy vars.yml template for player config
  ansible.builtin.template:
    src: vars.yml.j2
    dest: "{{ signage_config_dir }}/vars.yml"
    owner: root
    group: "{{ signage_user }}"
    mode: "0640"
  become: true

- name: Register screen at server via API
  ansible.builtin.uri:
    url: "{{ signage_server_base_url }}/api/v1/screens/register"
    method: POST
    body_format: json
    body:
      slug: "{{ screen_id }}"
      name: "{{ screen_name | default(screen_id) }}"
      orientation: "{{ screen_orientation | default('landscape') }}"
    headers:
      Content-Type: application/json
    status_code: [200, 201]
  delegate_to: localhost
  when: screen_id is defined
ansible/roles/signage_provision/templates/vars.yml.j2 (new file, 16 lines)

# Managed by Ansible – signage_provision role
# Do not edit manually on the device.

screen_id: "{{ screen_id }}"
screen_name: "{{ screen_name | default(screen_id) }}"
screen_orientation: "{{ screen_orientation | default('landscape') }}"

morz_server_base_url: "{{ morz_server_base_url | default(signage_server_base_url) }}"
morz_mqtt_broker: "{{ morz_mqtt_broker | default('') }}"
morz_mqtt_username: "{{ morz_mqtt_username | default('') }}"
morz_mqtt_password: "{{ morz_mqtt_password | default('') }}"

morz_heartbeat_every_seconds: {{ morz_heartbeat_every_seconds | default(30) }}
morz_status_report_every_seconds: {{ morz_status_report_every_seconds | default(60) }}
morz_player_listen_addr: "{{ morz_player_listen_addr | default('127.0.0.1:8090') }}"
morz_player_content_url: "{{ morz_player_content_url | default('') }}"
ansible/roles/signage_server/defaults/main.yml (new file, 26 lines)

---
signage_server_deploy_dir: /srv/docker/info-board-neu
signage_server_data_dir: /srv/docker/info-board-neu/data

# Backend
morz_http_addr: ":8080"
morz_database_url: "postgres://morz_infoboard:morz_infoboard@db:5432/morz_infoboard?sslmode=disable"
morz_upload_dir: /app/uploads
morz_status_store_path: /app/data/status
morz_default_tenant: morz
morz_dev_mode: "false"

# Admin password – must be overridden in group_vars or vault
morz_admin_password: ""

# MQTT
morz_mqtt_broker: ""
morz_mqtt_username: ""
morz_mqtt_password: ""

# Firewall
signage_server_ufw_enabled: true
signage_server_ufw_allow_https: true
signage_server_ufw_allow_mqtt: true
signage_server_mqtt_port: "1883"
signage_server_https_port: "443"
ansible/roles/signage_server/handlers/main.yml (new file, 7 lines)

---
- name: Restart morz-server stack
  community.docker.docker_compose_v2:
    project_src: "{{ signage_server_deploy_dir }}"
    state: present
    pull: always
  become: true
ansible/roles/signage_server/tasks/main.yml (new file, 130 lines)

---
- name: Install Docker dependencies
  ansible.builtin.apt:
    name:
      - ca-certificates
      - curl
      - gnupg
    state: present
    update_cache: true
  become: true

- name: Create Docker apt keyring directory
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    owner: root
    group: root
    mode: "0755"
  become: true

- name: Add Docker GPG key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/debian/gpg
    dest: /etc/apt/keyrings/docker.asc
    owner: root
    group: root
    mode: "0644"
  become: true

- name: Add Docker apt repository
  ansible.builtin.apt_repository:
    repo: >-
      deb [arch={{ ansible_architecture | replace('x86_64', 'amd64') | replace('aarch64', 'arm64') }}
      signed-by=/etc/apt/keyrings/docker.asc]
      https://download.docker.com/linux/debian
      {{ ansible_distribution_release }} stable
    state: present
    filename: docker
  become: true

- name: Install Docker Engine and Compose plugin
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    state: present
    update_cache: true
  become: true

- name: Ensure Docker service is enabled and running
  ansible.builtin.systemd:
    name: docker
    enabled: true
    state: started
  become: true

- name: Create server deploy directory
  ansible.builtin.file:
    path: "{{ signage_server_deploy_dir }}"
    state: directory
    owner: root
    group: root
    mode: "0750"
  become: true

- name: Create server data directory
  ansible.builtin.file:
    path: "{{ signage_server_data_dir }}"
    state: directory
    owner: root
    group: root
    mode: "0750"
  become: true

- name: Create uploads directory
  ansible.builtin.file:
    path: "{{ signage_server_deploy_dir }}/uploads"
    state: directory
    owner: root
    group: root
    mode: "0750"
  become: true

- name: Deploy docker-compose.yml
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: "{{ signage_server_deploy_dir }}/docker-compose.yml"
    owner: root
    group: root
    mode: "0640"
  become: true
  notify: Restart morz-server stack

- name: Deploy server environment file
  ansible.builtin.template:
    src: env.j2
    dest: "{{ signage_server_deploy_dir }}/.env"
    owner: root
    group: root
    mode: "0600"
  become: true
  notify: Restart morz-server stack

- name: Allow HTTPS through ufw
  community.general.ufw:
    rule: allow
    port: "{{ signage_server_https_port }}"
    proto: tcp
    comment: morz-infoboard HTTPS
  become: true
  when: signage_server_ufw_enabled and signage_server_ufw_allow_https

- name: Allow MQTT through ufw
  community.general.ufw:
    rule: allow
    port: "{{ signage_server_mqtt_port }}"
    proto: tcp
    comment: morz-infoboard MQTT
  become: true
  when: signage_server_ufw_enabled and signage_server_ufw_allow_mqtt

- name: Pull and start morz-server stack
  community.docker.docker_compose_v2:
    project_src: "{{ signage_server_deploy_dir }}"
    state: present
    pull: always
  become: true
ansible/roles/signage_server/templates/docker-compose.yml.j2 (new file, 58 lines)

---
# Managed by Ansible – signage_server role
# Do not edit manually on the server.

services:
  backend:
    image: git.az-it.net/az/morz-infoboard/backend:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      MORZ_INFOBOARD_HTTP_ADDR: "${MORZ_HTTP_ADDR}"
      MORZ_INFOBOARD_DATABASE_URL: "${MORZ_DATABASE_URL}"
      MORZ_INFOBOARD_UPLOAD_DIR: /app/uploads
      MORZ_INFOBOARD_STATUS_STORE_PATH: /app/data/status
      MORZ_INFOBOARD_MQTT_BROKER: "${MORZ_MQTT_BROKER}"
      MORZ_INFOBOARD_MQTT_USERNAME: "${MORZ_MQTT_USERNAME}"
      MORZ_INFOBOARD_MQTT_PASSWORD: "${MORZ_MQTT_PASSWORD}"
      MORZ_INFOBOARD_ADMIN_PASSWORD: "${MORZ_ADMIN_PASSWORD}"
      MORZ_INFOBOARD_DEFAULT_TENANT: "${MORZ_DEFAULT_TENANT}"
      MORZ_INFOBOARD_DEV_MODE: "${MORZ_DEV_MODE}"
    volumes:
      - ./uploads:/app/uploads
      - ./data:/app/data
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:17-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: morz_infoboard
      POSTGRES_PASSWORD: "${MORZ_DB_PASSWORD}"
      POSTGRES_DB: morz_infoboard
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U morz_infoboard"]
      interval: 10s
      timeout: 5s
      retries: 5

  mqtt:
    image: eclipse-mosquitto:2
    restart: unless-stopped
    ports:
      - "1883:1883"
      - "9001:9001"
    volumes:
      - ./mosquitto/config:/mosquitto/config:ro
      - mosquitto_data:/mosquitto/data
      - mosquitto_log:/mosquitto/log

volumes:
  db_data:
  mosquitto_data:
  mosquitto_log:
ansible/roles/signage_server/templates/env.j2 (new file, 16 lines)

# Managed by Ansible – signage_server role
# Do not edit manually on the server.

MORZ_HTTP_ADDR={{ morz_http_addr }}
MORZ_DATABASE_URL={{ morz_database_url }}
MORZ_DB_PASSWORD={{ morz_db_password | default('morz_infoboard') }}
MORZ_UPLOAD_DIR={{ morz_upload_dir }}
MORZ_STATUS_STORE_PATH={{ morz_status_store_path }}
MORZ_DEFAULT_TENANT={{ morz_default_tenant }}
MORZ_DEV_MODE={{ morz_dev_mode }}

MORZ_ADMIN_PASSWORD={{ morz_admin_password }}

MORZ_MQTT_BROKER={{ morz_mqtt_broker }}
MORZ_MQTT_USERNAME={{ morz_mqtt_username }}
MORZ_MQTT_PASSWORD={{ morz_mqtt_password }}
ansible/site.yml

@@ -1,7 +1,33 @@
 ---
+# Provision a fresh player (run once per new screen)
+- name: Provision new Signage Player
+  hosts: signage_players
+  gather_facts: false
+  tags: [provision]
+  roles:
+    - signage_provision
+
+# Base system setup for all signage nodes
+- name: Base setup for Signage Players
+  hosts: signage_players
+  gather_facts: true
+  tags: [base, player]
+  roles:
+    - signage_base
+
+# Deploy Morz Infoboard Player Agent and Kiosk Display
 - name: Deploy Morz Infoboard Player Agent
   hosts: signage_players
   gather_facts: false
+  tags: [player]
   roles:
     - signage_player
     - signage_display
+
+# Deploy Morz Infoboard Central Server
+- name: Deploy Morz Infoboard Central Server
+  hosts: signage_servers
+  gather_facts: true
+  tags: [server]
+  roles:
+    - signage_server
docs/GRUPPEN-KONZEPT.md (new file, 535 lines)
# Info-Board Neu - Grouping and slot model for cross-monitor layouts

## Goal

This document defines how screens are organized into groups and slots.

Groups and slots are needed for:

- **Bulk actions** — addressing multiple screens with a single campaign
- **Monitor walls** — distributing lettering and layouts across several screens
- **Future scalability** — adding new displays without restructuring

See also `docs/TEMPLATE-KONZEPT.md` for template types that use groups/slots.

## 1. Screen groups

### Concept

A group is a semantic aggregation of several screens.

**Examples:**

- `all` — all screens in the system
- `wall-all` — all 9 info-wall screens
- `wall-row-1` — the 3 screens of the first row
- `wall-row-2` — the 3 screens of the second row
- `single-all` — all standalone displays (e.g. substitution-plan displays)
- `outdoor` — all outdoor display boards

### Group types

#### Physical groups

Reflect the **real arrangement**:

- `wall-all` — all displays of one info wall
- `wall-row-1`, `wall-row-2`, `wall-row-3` — rows of a wall
- `wall-column-1`, `wall-column-2`, `wall-column-3` — columns of a wall

#### Functional groups

Reflect the **purpose**:

- `main-hall-all` — all displays in the main corridor
- `cafeteria-all` — all displays in the cafeteria
- `info-all` — all information displays

#### Type groups

Reflect the **device model**:

- `portrait-all` — all displays in portrait orientation
- `landscape-all` — all displays in landscape orientation
- `4k-displays` — 4K monitors only

#### Tenant groups (Phase 2)

Reflect **tenant membership**:

- `tenant-xyz-all` — all displays for tenant XYZ
- `tenant-xyz-public` — only the tenant's public displays

### Hierarchical structure

Groups can be nested:

```
all
├── wall-all
│   ├── wall-row-1
│   │   ├── info01
│   │   ├── info02
│   │   └── info03
│   ├── wall-row-2
│   │   ├── info04
│   │   ├── info05
│   │   └── info06
│   └── wall-row-3
│       ├── info07
│       ├── info08
│       └── info09
├── single-all
│   ├── info10 (substitution plan 1)
│   └── info11 (substitution plan 2)
└── fallback-displays
    └── [none currently]
```

**Automatic inference:**

A screen can belong to several groups:

```
info01:
  - all
  - wall-all
  - wall-row-1
  - portrait-all
  - online-displays (automatic, based on status)
```
|
||||||
|
|
||||||
|
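Die Aufloesung einer Gruppe inklusive aller Untergruppen laesst sich serverseitig als einfache rekursive Expansion skizzieren. Das Folgende ist eine vereinfachte In-Memory-Variante; Funktions- und Parameternamen sind illustrative Annahmen, nicht die tatsaechliche Implementierung:

```go
package main

import "fmt"

// ExpandGroup sammelt rekursiv alle Screen-Slugs einer Gruppe inklusive
// aller Untergruppen. children bildet Gruppe -> Untergruppen ab,
// members bildet Gruppe -> direkt zugeordnete Screens ab.
// Ein seen-Set verhindert Duplikate, wenn ein Screen in mehreren Gruppen ist.
func ExpandGroup(slug string, children, members map[string][]string) []string {
	seen := map[string]bool{}
	var out []string
	var walk func(g string)
	walk = func(g string) {
		for _, s := range members[g] {
			if !seen[s] {
				seen[s] = true
				out = append(out, s)
			}
		}
		for _, child := range children[g] {
			walk(child)
		}
	}
	walk(slug)
	return out
}

func main() {
	children := map[string][]string{"wall-all": {"wall-row-1", "wall-row-2"}}
	members := map[string][]string{
		"wall-row-1": {"info01", "info02", "info03"},
		"wall-row-2": {"info04", "info05", "info06"},
	}
	fmt.Println(ExpandGroup("wall-all", children, members))
}
```

In der Praxis wuerde dieselbe Expansion vermutlich ueber `parent_group_id` per rekursiver CTE direkt in der Datenbank erfolgen.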
## 2. Slot-Modell

### Konzept

Slots beschreiben **feste Positionen innerhalb eines Layouts**.

Sie werden hauptsaechlich fuer `message_wall`-Templates verwendet, um Ausschnitte von Grossmotiven auf einzelne Screens zu verteilen.

**Beispiel: 3x3 Infowand**

```
┌─────────────────────────────────┐
│  [0,0]    [0,1]    [0,2]        │  Slot wall-r1-c1, wall-r1-c2, wall-r1-c3
├─────────────────────────────────┤
│  [1,0]    [1,1]    [1,2]        │  Slot wall-r2-c1, wall-r2-c2, wall-r2-c3
├─────────────────────────────────┤
│  [2,0]    [2,1]    [2,2]        │  Slot wall-r3-c1, wall-r3-c2, wall-r3-c3
└─────────────────────────────────┘
```

**Slot-Nomenklatur:**

- `wall-r{reihe}-c{spalte}` (Zeile/Spalte im 0er-System oder 1er-System)
- `wall-slot-{nummer}` (durchnummeriert, z.B. wall-slot-0 bis wall-slot-8)

### Geometrische Definition

Fuer jeden Slot wird definiert:

```json
{
  "slot_id": "wall-r1-c1",
  "row": 0,
  "col": 0,
  "layout_name": "3x3_grid",
  "crop_x": 0,
  "crop_y": 0,
  "crop_width": 640,
  "crop_height": 1080,
  "assigned_screen_id": "info01"
}
```

Diese Werte sind:

- **serverseitig generiert** — Admin muss nicht manuell Pixel-Koordinaten eingeben
- **automatisch skalierbar** — bei verschiedenen Aufloesungen

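Die Crop-Werte ergeben sich rein aus der Slot-Position und der Grid-Groesse. Eine minimale Skizze der serverseitigen Berechnung; die Gesamtgroesse des Motivs (hier 1920x3240 fuer drei Spalten zu 640 px und drei Reihen zu 1080 px) sowie alle Namen sind Annahmen:

```go
package main

import "fmt"

// CropRect beschreibt den Bildausschnitt eines Slots im Gesamtmotiv,
// analog zu crop_x/crop_y/crop_width/crop_height oben.
type CropRect struct {
	X, Y, Width, Height int
}

// SlotCrop berechnet den Ausschnitt fuer einen Slot (row/col im 0er-System)
// aus der Gesamtgroesse des Motivs und der Grid-Groesse des Layouts.
func SlotCrop(row, col, rows, cols, totalW, totalH int) CropRect {
	w := totalW / cols
	h := totalH / rows
	return CropRect{X: col * w, Y: row * h, Width: w, Height: h}
}

func main() {
	// Slot wall-r1-c1 einer 3x3-Infowand bei 1920x3240 Gesamtmotiv.
	fmt.Println(SlotCrop(0, 0, 3, 3, 1920, 3240))
}
```

Weil nur Verhaeltnisse eingehen, skaliert dieselbe Funktion automatisch fuer andere Aufloesungen mit.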
## 3. Datenmodell

### Tabelle `screen_groups`

```sql
CREATE TABLE screen_groups (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    slug TEXT NOT NULL UNIQUE,
    name TEXT NOT NULL,
    description TEXT,
    group_type TEXT NOT NULL CHECK (group_type IN (
        'physical', 'functional', 'device_type', 'tenant', 'custom'
    )),
    parent_group_id UUID REFERENCES screen_groups(id),
    active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```

**Beispiele:**

```sql
INSERT INTO screen_groups (slug, name, group_type)
VALUES
    ('all', 'Alle Screens', 'custom'),
    ('wall-all', 'Infowand - Alle', 'physical'),
    ('wall-row-1', 'Infowand - Reihe 1', 'physical'),
    ('single-all', 'Einzelanzeigen', 'functional'),
    ('portrait-all', 'Hochformat', 'device_type');
```

### Tabelle `screen_group_members`

```sql
CREATE TABLE screen_group_members (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    group_id UUID NOT NULL REFERENCES screen_groups(id) ON DELETE CASCADE,
    screen_id UUID NOT NULL REFERENCES screens(id) ON DELETE CASCADE,
    added_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    UNIQUE(group_id, screen_id)
);
```

**Beispiel:**

```sql
INSERT INTO screen_group_members (group_id, screen_id)
SELECT
    (SELECT id FROM screen_groups WHERE slug = 'wall-row-1'),
    id
FROM screens
WHERE slug IN ('info01', 'info02', 'info03');
```

### Tabelle `layout_definitions`

```sql
CREATE TABLE layout_definitions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    slug TEXT NOT NULL UNIQUE,
    name TEXT NOT NULL,
    layout_type TEXT NOT NULL CHECK (layout_type IN (
        '3x3_grid', '2x2_grid', '1x9_row', '9x1_column', 'custom'
    )),
    rows INT NOT NULL,
    cols INT NOT NULL,
    description TEXT,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```

**Beispiel:**

```sql
INSERT INTO layout_definitions (slug, name, layout_type, rows, cols)
VALUES ('3x3_infowand', 'Infowand 3x3', '3x3_grid', 3, 3);
```

### Tabelle `layout_slots`

```sql
CREATE TABLE layout_slots (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    layout_id UUID NOT NULL REFERENCES layout_definitions(id) ON DELETE CASCADE,
    slot_slug TEXT NOT NULL,
    "row" INT NOT NULL, -- "row" ist in PostgreSQL ein reserviertes Wort, daher quotiert
    col INT NOT NULL,
    UNIQUE(layout_id, slot_slug)
);
```

**Beispiel:**

```sql
INSERT INTO layout_slots (layout_id, slot_slug, "row", col)
SELECT
    (SELECT id FROM layout_definitions WHERE slug = '3x3_infowand'),
    'wall-r' || r || '-c' || c,
    r - 1, c - 1
FROM generate_series(1, 3) AS r
CROSS JOIN generate_series(1, 3) AS c;
```

### Tabelle `slot_screen_assignments`

```sql
CREATE TABLE slot_screen_assignments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    layout_id UUID NOT NULL REFERENCES layout_definitions(id),
    slot_id UUID NOT NULL REFERENCES layout_slots(id) ON DELETE CASCADE,
    screen_id UUID NOT NULL REFERENCES screens(id),
    assigned_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    UNIQUE(layout_id, slot_id, screen_id)
);
```

**Beispiel:**

```sql
-- Zuordnung: Slot wall-r1-c1 → Screen info01 (in 3x3-Layout)
INSERT INTO slot_screen_assignments (layout_id, slot_id, screen_id)
SELECT
    l.id,
    ls.id,
    s.id
FROM
    layout_definitions l,
    layout_slots ls,
    screens s
WHERE
    l.slug = '3x3_infowand'
    AND ls.layout_id = l.id
    AND ls.slot_slug = 'wall-r1-c1'
    AND s.slug = 'info01';
```

## 4. Admin-Verwaltung

### Gruppen verwalten

**Seite:** Admin → Gruppen

```
┌──────────────────────────────────────────┐
│ Screen-Gruppen                           │
├──────────────────────────────────────────┤
│                                          │
│ Gruppe         Typ           Screens     │
│ ──────────────────────────────────────── │
│ all            custom        13          │
│ wall-all       physical      9           │
│ wall-row-1     physical      3           │
│ wall-row-2     physical      3           │
│ wall-row-3     physical      3           │
│ single-all     functional    2           │
│ portrait-all   device_type   12          │
│                                          │
│ [+ Neue Gruppe]  [Gruppe bearbeiten]     │
└──────────────────────────────────────────┘
```

### Gruppe erstellen/bearbeiten

```
┌──────────────────────────────────────────┐
│ Neue Gruppe                              │
├──────────────────────────────────────────┤
│                                          │
│ Name *                                   │
│ [ Infowand Reihe 2 __________________ ]  │
│ slug: wall-row-2 (automatisch)           │
│                                          │
│ Gruppentyp *                             │
│ ⦿ physical (Wand-Anordnung)              │
│ ○ functional (Verwendungszweck)          │
│ ○ device_type (Geraetetyp)               │
│ ○ tenant (Mandant)                       │
│ ○ custom (benutzerdefiniert)             │
│                                          │
│ Beschreibung                             │
│ [ Die obere Reihe der Infowand ______ ]  │
│                                          │
│ Screens hinzufuegen                      │
│ [ Suchfeld: "info" ]                     │
│ □ info01  ← obere Reihe                  │
│ □ info02  ← obere Reihe                  │
│ ☑ info03  ← obere Reihe                  │
│ □ info04                                 │
│ ... (nur unzugeordnete zeigen)           │
│                                          │
│ Ausgewaehlte Screens                     │
│ info03 (portrait, online)                │
│ [ + weitere hinzufuegen ]                │
│                                          │
│ Uebergruppe                              │
│ [Dropdown: all > wall-all]               │
│ (optional, zur Hierarchie)               │
│                                          │
│ [Speichern] [Abbrechen]                  │
└──────────────────────────────────────────┘
```

### Layout-Definition erstellen (fuer Message-Wall)

**Seite:** Admin → Layouts

```
┌──────────────────────────────────────────┐
│ Layout-Definitionen                      │
├──────────────────────────────────────────┤
│                                          │
│ Layout-Name      Typ       Grid  Slots   │
│ ──────────────────────────────────────── │
│ 3x3 Infowand     3x3_grid  3x3   9       │
│ Vertretungsplan  2x2_grid  2x2   4       │
│ News-Lauf        1x9_row   1x9   9       │
│                                          │
│ [+ Neues Layout]  [Bearbeiten]           │
└──────────────────────────────────────────┘
```

Detailseite eines Layouts:

```
Layout: 3x3 Infowand

Visualisierung:
┌─────────┬─────────┬─────────┐
│ Slot 1  │ Slot 2  │ Slot 3  │
├─────────┼─────────┼─────────┤
│ Slot 4  │ Slot 5  │ Slot 6  │
├─────────┼─────────┼─────────┤
│ Slot 7  │ Slot 8  │ Slot 9  │
└─────────┴─────────┴─────────┘

Slot-Zuordnungen:
Slot 1 (wall-r1-c1) → Screen info01 (portrait, 1920x1080)
Slot 2 (wall-r1-c2) → Screen info02 (portrait, 1920x1080)
...

[Screen-Zuordnungen aendern] [Layout loeschen]
```

## 5. Anwendung in Kampagnen

### Kampagne auf Gruppe anwenden

**Beispiel:** Admin aktiviert Weihnachtsmotiv auf `wall-all`:

```
Template: Weihnachtsmotiv 2025 (full_screen_media)

Zielgruppe auswaehlen:
⦿ Alle Screens
○ Nach Gruppe:
  [Dropdown: wall-all ]
  oder wall-row-1, single-all, ...
○ Einzelne Screens

→ Kampagne wird auf alle 9 Screens in wall-all aktiviert
→ Jeder Screen zeigt dasselbe Motiv
→ (Portrait/Landscape-Varianten werden serverseitig beruecksichtigt)
```

### Message-Wall-Kampagne mit Slot-Modell

**Beispiel:** Admin teilt Schriftzug auf Infowand auf:

```
Template: Schriftzug (message_wall)

Layout: 3x3 Infowand
Zielgruppe: wall-all (auto-expandiert zu Slots)

Gesamte Grafik hochladen oder zeichnen
  ↓
System generiert automatisch:
- Slot wall-r1-c1 → Ausschnitt x0-640 y0-1080 → Screen info01
- Slot wall-r1-c2 → Ausschnitt 640-1280 y0-1080 → Screen info02
- Slot wall-r1-c3 → Ausschnitt 1280-1920 y0-1080 → Screen info03
- ... (9 Zuweisungen insgesamt)
  ↓
Kampagne aktivieren
  ↓
Jeder Screen laedt seinen zustaendigen Ausschnitt
  ↓
Schriftzug erscheint verteilt ueber alle 9 Screens
```

## 6. Automatische Gruppen-Inferenz

Der Server kann bestimmte Gruppen automatisch generieren:

```yaml
# Automatisch generierte Gruppen

all:
  - alle Screens im System (manuelle Verwaltung nicht noetig)

online-all:
  - alle Screens, die gerade online sind
  - wird alle 5 Min aktualisiert

offline-all:
  - alle Screens, die gerade offline sind

portrait-all:
  - alle Screens mit Orientierung = "portrait"

landscape-all:
  - alle Screens mit Orientierung = "landscape"

device_type_*:
  - fuer jeden konfigurierten Screen-Typ (z.B. device_type_raspberry_pi)

region_*:
  - optional: auf Basis von Geo-Daten oder Tags
```

Diese automatischen Gruppen sind **read-only** im Admin-UI, aber voll verwendbar fuer Kampagnen.

## 7. Beispiel: Neuinstallation einer Infowand

**Szenario:** Admin installiert neue 3x3-Infowand mit Screens info01-info09.

**Schritte:**

1. **Screens anlegen** (via Provisionierungs-UI oder direkt)
```
info01, info02, ..., info09
Alle: Orientierung portrait, Geraetetyp "raspberry_pi"
```

2. **Gruppen anlegen**
```
screen_groups:
- slug: wall-all, name: "Infowand Alle", type: physical
- slug: wall-row-1, name: "Infowand Reihe 1", type: physical
- slug: wall-row-2, name: "Infowand Reihe 2", type: physical
- slug: wall-row-3, name: "Infowand Reihe 3", type: physical
```

3. **Screens den Gruppen zuordnen**
```
wall-all: info01-info09
wall-row-1: info01, info02, info03
wall-row-2: info04, info05, info06
wall-row-3: info07, info08, info09
```

4. **Layout definieren**
```
layout_definitions:
- slug: 3x3_infowand, rows: 3, cols: 3

layout_slots:
- wall-r1-c1, wall-r1-c2, wall-r1-c3 (row 0)
- wall-r2-c1, wall-r2-c2, wall-r2-c3 (row 1)
- wall-r3-c1, wall-r3-c2, wall-r3-c3 (row 2)

slot_screen_assignments:
- wall-r1-c1 → info01
- wall-r1-c2 → info02
- ... (9 gesamt)
```

5. **Kampagnen verwenden**
```
Template: Schriftzug
Zielgruppe: wall-all
Layout: 3x3_infowand
→ Kampagne kann sofort aktiviert werden
```

## 8. Zusammenfassung

Das Gruppierungs- und Slot-Modell:

- **ist flexibel** — physische, funktionale und typen-basierte Gruppen
- **ist hierarchisch** — Gruppen koennen Untergruppen enthalten
- **ist automatisch** — Gruppen wie "all" und "online-all" werden inferiert
- **ist geometrisch** — Slots definieren Layouts fuer verteilte Motive
- **ist skalierbar** — neue Screens werden einfach Gruppen zugeordnet
- **ist intuitiv** — Admin-UI zeigt Zuordnungen und Vorschauen

483 docs/KAMPAGNEN-AKTIVIERUNG.md Normal file
@@ -0,0 +1,483 @@

# Info-Board Neu - Aktivierungsoberflaeche fuer saisonale und temporaere Kampagnen

## Ziel

Die Aktivierungsoberflaeche ermoeglicht es dem Admin, Kampagnen zeitlich und gezielt auf Screens auszurollen — sofort oder geplant.

Dieses Dokument beschreibt:

- die Aktivierungs-Workflows im Admin-UI
- zeitgesteuerte Aktivierung (Scheduler)
- Screen-Zuordnung und Vorschau
- Status und Kontrolle waehrend der Laufzeit

Siehe auch `docs/TEMPLATE-EDITOR.md` fuer die Template-Verwaltung und `docs/TEMPLATE-KONZEPT.md` fuer konzeptionelle Grundlagen.

## 1. Aktivierungs-Workflows

### Workflow 1 — Schnelle Sofort-Aktivierung

**Szenario:** Admin hat ein Template und will es sofort starten.

**Weg:**

Admin → Templates → [Template] → "Aktivieren"

```
┌──────────────────────────────────────────┐
│ Kampagne starten: Weihnachtsmotiv 2025   │
├──────────────────────────────────────────┤
│                                          │
│ Kampagnen-Name (eindeutig)               │
│ [ Weihnachten 2025 _________________ ]   │
│ Vorschau: morz_campaign_xmas2025         │
│                                          │
│ Zielgruppe pruefen                       │
│ aus Template: Alle Screens (13)          │
│ [Gruppe aendern] [Screens aendern]       │
│                                          │
│ Dauer                                    │
│ ⦿ Sofort starten                         │
│   gueltig ab jetzt                       │
│ ○ Geplant starten                        │
│   [Datum/Uhrzeit auswaehlen]             │
│                                          │
│ Gueltig bis                              │
│ [Datum/Uhrzeit auswaehlen]               │
│ oder [ ] unbegrenzt                      │
│                                          │
│ Prioritaet gegenueber Playlist           │
│ [10____________] hoeher = wichtiger      │
│ Standardwert: 1                          │
│                                          │
│ Auto-Deaktivierung bei Ablauf?           │
│ ⦿ Ja, danach Fallback zeigen             │
│ ○ Nein, manuell deaktivieren             │
│                                          │
│ Vorschau betroffener Screens             │
│ [Screenshot-Vorschau mit Kampagnen-      │
│  Inhalt fuer ausgew. Screens]            │
│                                          │
│ [Aktivieren] [Abbrechen]                 │
└──────────────────────────────────────────┘
```

**Aktion:**

- Server speichert Kampagne mit `active = true`, `valid_from = NOW()`
- Server expandiert Zielgruppe in konkrete Screens
- Alle betroffenen Screens erhalten das MQTT-Signal `playlist-changed` (die Playlist selbst bleibt gleich, aber die Kampagnen-Prioritaet aendert sich)
- Screens synchronisieren und laden neue Kampagnen-Inhalte

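Die Benachrichtigung der expandierten Screens laesst sich so skizzieren. Das Topic-Schema und alle Namen sind Annahmen; `Publisher` abstrahiert den realen MQTT-Client, damit die Logik ohne Broker testbar bleibt:

```go
package main

import "fmt"

// Publisher abstrahiert den MQTT-Client; das konkrete Client-API ist
// hier bewusst ausgeklammert.
type Publisher interface {
	Publish(topic string, payload []byte) error
}

// NotifyScreens sendet jedem Screen der expandierten Zielgruppe ein
// playlist-changed-Signal. Topic-Schema: Annahme, nicht das reale Schema.
func NotifyScreens(pub Publisher, screens []string) error {
	for _, slug := range screens {
		topic := fmt.Sprintf("infoboard/screens/%s/playlist-changed", slug)
		if err := pub.Publish(topic, []byte(`{"reason":"campaign"}`)); err != nil {
			return fmt.Errorf("publish %s: %w", topic, err)
		}
	}
	return nil
}

// fakePub sammelt Topics, um NotifyScreens ohne Broker zu pruefen.
type fakePub struct{ topics []string }

func (f *fakePub) Publish(topic string, _ []byte) error {
	f.topics = append(f.topics, topic)
	return nil
}

func main() {
	p := &fakePub{}
	_ = NotifyScreens(p, []string{"info01", "info02"})
	fmt.Println(p.topics)
}
```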
### Workflow 2 — Geplante Aktivierung

**Szenario:** Admin bereitet eine Kampagne vor, die aber erst am naechsten Tag um 8:00 Uhr starten soll.

**Weg:**

Admin → Templates → [Template] → "Aktivieren" → "Geplant starten"

```
┌──────────────────────────────────────────┐
│ Geplante Aktivierung: Ostern 2025        │
├──────────────────────────────────────────┤
│                                          │
│ Kampagnen-Name                           │
│ [ Ostern_Dekoration_2025 ____________ ]  │
│                                          │
│ Startdatum und -uhrzeit                  │
│ [2025-04-14] [08:00] [Kalender/Uhr]      │
│                                          │
│ Enddatum und -uhrzeit (optional)         │
│ [2025-04-21] [20:00] [Kalender/Uhr]      │
│ oder [ ] Kein Enddatum                   │
│                                          │
│ Prioritaet                               │
│ [1_____________]                         │
│                                          │
│ Auto-Deaktivierung?                      │
│ ⦿ Ja                                     │
│ ○ Nein                                   │
│                                          │
│ Status                                   │
│ ◯ GEPLANT — wird am 2025-04-14 08:00     │
│   aktiviert                              │
│                                          │
│ Erinnerung setzen (optional)             │
│ [ ] Erinnerungs-Email 1 Tag vorher       │
│ [ ] Erinnerungs-Email 1 Stunde vorher    │
│                                          │
│ [Planen & Speichern] [Abbrechen]         │
└──────────────────────────────────────────┘
```

**Aktion:**

- Server speichert Kampagne mit `active = false`, `valid_from = 2025-04-14 08:00`
- Server erstellt internen Scheduler-Job
- Admin sieht Kampagne in der Liste mit Status "GEPLANT"
- Zum geplanten Zeitpunkt:
  - Scheduler setzt `campaigns.active = true`
  - MQTT-Signal an alle betroffenen Screens
  - Optionale Erinnerungs-Email an Admin

### Workflow 3 — Schnelle Deaktivierung

**Szenario:** Kampagne laeuft, Admin will sie sofort stoppen.

**Weg:**

Admin → Kampagnen → [laufende Kampagne] → "Deaktivieren"

```
┌──────────────────────────────────────────┐
│ Kampagne deaktivieren?                   │
├──────────────────────────────────────────┤
│                                          │
│ Kampagne: Weihnachten 2025               │
│ Status: AKTIV seit 2025-12-01 09:00      │
│ Betroffene Screens: 13                   │
│                                          │
│ Aktion:                                  │
│ ⦿ Sofort deaktivieren                    │
│   Screens zeigen danach wieder           │
│   Tenant-Playlist oder Fallback          │
│                                          │
│ ○ Mit Verzoegerung (Fade-Out)            │
│   [2 Min] [5 Min] [Uhr auswaehlen]       │
│   Nuetzlich: Licht dimmen, Musik leiser  │
│   etc. vor Inhalt-Wechsel                │
│                                          │
│ [Ja, deaktivieren] [Abbrechen]           │
└──────────────────────────────────────────┘
```

**Aktion:**

- Server setzt `campaigns.active = false`
- Server sendet MQTT-Signal an Screens
- Screens wechseln sofort (oder mit Verzoegerung) zu Fallback/Playlist
- Kampagne verschwindet aus "Aktive Kampagnen"-Liste

## 2. Zeitplanung und Scheduler

### Automatisierte Scheduler-Jobs

Der Server betreibt einen einfachen Scheduler als Goroutine oder als separaten Service.

```go
// Pseudocode
type CampaignScheduler struct {
	db CampaignStore // Zugriff auf die campaigns-Tabelle
}

// Beim Starten
func startScheduler(ctx context.Context) {
	scheduler := NewCampaignScheduler()
	go scheduler.RunScheduler(ctx)
}

// Im Hintergrund
func (s *CampaignScheduler) RunScheduler(ctx context.Context) {
	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			// Checke alle geplanten Kampagnen
			campaigns := s.db.GetScheduledCampaigns()
			now := time.Now()
			for _, c := range campaigns {
				if !c.Active && !now.Before(c.ValidFrom) {
					// Aktiviere die Kampagne
					s.ActivateCampaign(c.ID)
				}
				if c.Active && c.ValidUntil != nil && !now.Before(*c.ValidUntil) {
					// Deaktiviere die Kampagne
					s.DeactivateCampaign(c.ID)
				}
			}
		}
	}
}
```

### Persistenz ueber Restart

Scheduler-Jobs werden in der Datenbank gespeichert (Spalten `valid_from`, `valid_until`, `active` in der `campaigns`-Tabelle).

Beim Neustart des Servers:

1. Server laedt alle geplanten/aktiven Kampagnen
2. Scheduler prueft bei jedem Takt (1 Min), ob eine Aktivierung/Deaktivierung faellig ist
3. Kein Datenverlust, kein komplexes Job-Persisting noetig

### Erinnerungen und Notifications

**Optional (Phase 2):**

- Email-Erinnerung N Stunden vor Aktivierung
- Webhook-Notification fuer externe Systeme
- In-App-Benachrichtigung im Admin-Dashboard

## 3. Screen-Zuordnung und Vorschau

### Interaktive Zielgruppen-Auswahl

Waehrend der Kampagnen-Erstellung kann der Admin entscheiden, welche Screens betroffen sein sollen.

```
Zielgruppe
⦿ Alle Screens
○ Nach Gruppe auswaehlen:
  □ wall-all (9 Screens)
  □ single-info (2 Screens)
  □ vertretungsplan-all (2 Screens)
○ Einzelne Screens:
  [ Suchfeld: "info" ]
  □ info01 (portrait)
  □ info02 (portrait)
  ☑ info03 (portrait)
  □ info04 (portrait)
  ...
```

### Rendering-Vorschau

Admin sieht, wie die Kampagne auf verschiedenen Zielscreens aussieht:

```
Betroffene Screens: 4 ausgew.

┌─────────────────────────────────────┐
│ info01 (portrait, 1920x1080)        │
│ ┌────────────────────────────────┐  │
│ │                                │  │
│ │  [Kampagnen-Inhalt: Bild]      │  │
│ │  (Portrait-Assets verwendet)   │  │
│ │                                │  │
│ └────────────────────────────────┘  │
└─────────────────────────────────────┘

┌─────────────────────────────────────┐
│ info05 (landscape, 2560x1440)       │
│ ┌────────────────────────────────┐  │
│ │  [Kampagnen-Inhalt: Bild]      │  │
│ │  (Landscape-Assets verwendet)  │  │
│ └────────────────────────────────┘  │
└─────────────────────────────────────┘

[Scrollen um weitere Screens zu sehen]
```

### Live-Uebersicht waehrend Laufzeit

Wenn eine Kampagne aktiv ist, zeigt das Admin-Dashboard:

```
Kampagne: Weihnachten 2025 einfuehrung
Status: AKTIV seit 2025-12-01 09:00

Betroffene Screens: 13
✓ Aktiv angezeigt: 11 (info01-info08, info10, info11, info13)
◯ Wartet auf Sync: 1 (info09)
✗ Offline: 1 (info12)

Zuletzt geprueft: vor 30 Sekunden

[Aktualisieren] [Deaktivieren] [Bearbeiten]
```

## 4. Kampagnen-Verwaltung waehrend Laufzeit

### Aktive Kampagnen — Haupt-Dashboard

**Seite:** Admin → Aktive Kampagnen (oder Campaigns)

```
┌─────────────────────────────────┐
│ Aktive Kampagnen                │
├─────────────────────────────────┤
│                                 │
│ Weihnachten 2025 einfuehrung    │ ▼
│ Template: Weihnachtsmotiv 2025  │
│ Aktiv seit: 2025-12-01 09:00    │
│ Aktiv bis: 2025-12-26 23:59     │
│ Betroffene: 13 Screens          │
│ Status: ✓ Auf allen Screens ok  │
│                                 │
│ [Vorschau] [Bearbeiten]         │
│ [Deaktivieren]                  │
│                                 │
├─────────────────────────────────┤
│                                 │
│ Event-Tag 25.03                 │
│ Template: screen_specific_scene │
│ Aktiv seit: 2025-03-25 00:00    │
│ Aktiv bis: 2025-03-25 23:59     │
│ Betroffene: 4 Screens           │
│ Status: ◯ 1 Screen offline      │
│                                 │
│ [Vorschau] [Bearbeiten]         │
│ [Deaktivieren]                  │
│                                 │
└─────────────────────────────────┘
```

### Geplante Kampagnen

**Seite:** Admin → Kampagnen (Alle)

```
┌─────────────────────────────────┐
│ Geplante Kampagnen              │
├─────────────────────────────────┤
│                                 │
│ Ostern-Dekoration 2025          │ ▼
│ Template: full_screen_media     │
│ Status: GEPLANT                 │
│ Startet: 2025-04-14 08:00       │
│ Endet: 2025-04-21 20:00         │
│ Betroffene: 13 Screens          │
│ Erinnerung: 1 Tag vorher        │
│                                 │
│ [Vorschau] [Bearbeiten]         │
│ [Jetzt aktivieren] [Loeschen]   │
│                                 │
├─────────────────────────────────┤
│                                 │
│ Sommer-Kampagne                 │
│ Status: GEPLANT                 │
│ Startet: 2025-06-01 00:00       │
│                                 │
│ ...                             │
│                                 │
└─────────────────────────────────┘
```

### Abgelaufene Kampagnen

**Seite:** Admin → Kampagnen (Archiv)

```
Zeigt inaktive/abgelaufene Kampagnen fuer Audit-Trail.

[ Kampagne ]    Zeitraum            Status
Ostern 2025     2025-04-14—04-21    Auto-Deaktiviert
Karneval        2025-02-28—03-05    Manuell deaktiviert
Valentinstag    2025-02-14          Auto-Deaktiviert
```

## 5. Prioritaetsverwaltung

### Prio-Einstellung pro Kampagne

```
Prioritaet gegenueber Tenant-Playlist
┌─────────────────────────────────┐
│ Schieber oder Zahlenfeld        │
│                                 │
│ [|━━━━━━━━━━━|          ] 10    │
│  1    5    10        100        │
│                                 │
│ Bedeutung:                      │
│ 1 = normale Kampagne            │
│ 10 = hohe Prioritaet (Standard) │
│ 100 = Notfall / absolut wichtig │
│                                 │
│ Diese Prioritaet wird ueber     │
│ alle Tenant-Playlists gestellt. │
│ Falls mehrere Kampagnen aktiv   │
│ sind, gewinnt die mit der       │
│ hoechsten Prioritaet.           │
└─────────────────────────────────┘
```

### Konflikt-Management (mehrere Kampagnen gleichzeitig)

Falls mehrere Kampagnen fuer denselben Screen aktiv sind:

1. Sortierung nach Prioritaet (hoechste gewinnt)
2. Bei gleicher Prioritaet: nach Start-Zeitstempel (neueste gewinnt)
3. Admin sieht im Status-Dashboard eine Warnung: "2 Kampagnen fuer info01 aktiv"

Empfehlung: Admin sollte Zeitraeume von Kampagnen nicht ueberlappen lassen.

## 6. Fehlerbehandlung

### Was, wenn ein Screen offline ist?

```
Kampagne wird aktiviert, aber Screen info03 ist gerade offline:

1. Server weiss, dass info03 Ziel der Kampagne ist
2. Server loggt: "Kampagne XYZ kann nicht auf info03 ausgeliefert werden (offline)"
3. info03 hat die letzte gueltige Kampagne gecacht
4. Sobald info03 wieder online kommt:
   - Player synchronisiert
   - Server sagt: "Kampagne XYZ ist aktiv"
   - Player laedt und rendert
5. Status im Dashboard: "info03 — Offline, wird synchronisiert sobald online"
```

### Rollback bei fehlgeschlagener Aktivierung

Falls eine Kampagne fehlerhaft ist (kaputtes Video, Renderingfehler):

```
1. Screen zeigt Fehler-Overlay
2. Admin ist informiert (Status-API zeigt Fehler)
3. Admin-Aktion 1: Template korrigieren
   - Fehlerhaftes Asset austauschen
   - Kampagne aktualisieren
   - Screens neu synchronisieren
4. Admin-Aktion 2: Schnelle Deaktivierung
   - Kampagne abschalten
   - Fallback/Playlist kehrt zurueck
```

## 7. Datenschutz und Audit

### Audit-Trail

Alle Kampagnen-Aenderungen werden protokolliert:

```json
{
  "ts": "2025-03-25T14:22:00Z",
  "event": "campaign_activated",
  "campaign_id": "uuid-...",
  "campaign_name": "Ostern-Dekoration",
  "triggered_by_user_id": "admin123",
  "triggered_by_email": "admin@example.com",
  "details": {
    "valid_from": "2025-04-14T08:00:00Z",
    "valid_until": "2025-04-21T20:00:00Z",
    "target_screens_count": 13
  }
}
```

Diese Logs sind fuer Compliance und Forensik wichtig.

### Sichtbarkeitsbeschraenkung
|
||||||
|
|
||||||
|
Nur Benutzer mit Admin-Rolle koennen:
|
||||||
|
|
||||||
|
- Kampagnen erstellen/aendernx
|
||||||
|
- Templates bearbeiten
|
||||||
|
- Aktivierung planen
|
||||||
|
|
||||||
|
Tenant-User sehen keine Kampagnen-Verwaltung.
|
||||||
|
|
||||||
|
## 8. Summary

The activation UI:

- **is beginner-friendly** — multi-step forms with preview
- **supports immediate and scheduled activation** — spontaneous or weeks in advance
- **is observable** — live status and error reporting
- **is automated** — the scheduler handles switching campaigns on and off
- **is safe** — audit trail and rollback options
- **is robust** — offline screens are synchronized later
docs/MONITORING-KONZEPT.md (new file, 470 lines)
# Info-Board Neu - Logging and Monitoring Concept

## Goal

Logging and monitoring give the operations team full transparency into:

- behavior and errors on the player
- behavior and errors on the server
- health status of all screens
- network and synchronization problems
- capacity utilization and trends

The concept must be robust against disk-space shortages on the Raspberry Pi and centrally analyzable on the server.

## Logging Architecture

### General Principles

- **structured JSON logging** — structured fields instead of free-text messages
- **log levels**: `debug`, `info`, `warn`, `error`, `fatal`
- **central analysis** — players log locally and additionally send to the server
- **rotation and cleanup** — local logs are rotated and compressed
- **privacy** — no sensitive content (passwords, API keys) in the logs

### Components and Their Logs
## 1. Player Logs

### Player Agent

The agent logs:

- **startup/shutdown**

```json
{
  "ts": "2025-03-23T14:22:00Z",
  "level": "info",
  "component": "agent",
  "event": "startup",
  "config_file": "/etc/signage/config.yml",
  "screen_id": "info01"
}
```

- **server sync**

```json
{
  "ts": "2025-03-23T14:22:05Z",
  "level": "info",
  "component": "agent.sync",
  "event": "sync_complete",
  "duration_ms": 342,
  "items_synced": 15,
  "bytes_downloaded": 4521000
}
```

- **MQTT events**

```json
{
  "ts": "2025-03-23T14:22:10Z",
  "level": "info",
  "component": "agent.mqtt",
  "event": "playlist_changed",
  "source": "mqtt",
  "cause": "playlist-changed-event"
}
```

- **errors**

```json
{
  "ts": "2025-03-23T14:22:15Z",
  "level": "error",
  "component": "agent.cache",
  "event": "download_failed",
  "media_id": "abc123",
  "url": "https://cdn.example.com/video.mp4",
  "error": "connection_timeout",
  "retry_count": 2
}
```

- **watchdog events** (see WATCHDOG-KONZEPT.md)
### Player UI

The local web app logs:

- **item changes**

```json
{
  "ts": "2025-03-23T14:23:00Z",
  "level": "info",
  "component": "ui",
  "event": "item_change",
  "previous_item": "img-001",
  "current_item": "video-002",
  "source": "campaign"
}
```

- **rendering errors**

```json
{
  "ts": "2025-03-23T14:23:05Z",
  "level": "warn",
  "component": "ui.renderer",
  "event": "render_failed",
  "item_id": "url-003",
  "media_type": "webpage",
  "error": "load_timeout",
  "timeout_ms": 10000
}
```

- **overlay status changes**

```json
{
  "ts": "2025-03-23T14:23:10Z",
  "level": "info",
  "component": "ui.overlay",
  "event": "status_change",
  "old_status": "online",
  "new_status": "offline",
  "reason": "broker_connection_lost"
}
```

### Chromium

The browser itself is hard to instrument directly, but the systemd journal captures:

- startup and command-line arguments
- crash messages
- error output on page-load failures
## 2. Server Logs

### Backend API

The server logs:

- **HTTP requests** (structured, not the full request body)

```json
{
  "ts": "2025-03-23T14:22:20Z",
  "level": "info",
  "component": "server.http",
  "method": "POST",
  "path": "/api/v1/screens/info01/playlist",
  "status": 200,
  "duration_ms": 34,
  "user_id": "admin123",
  "tenant_id": "tenant01"
}
```

- **database operations** (debug level only)

```json
{
  "ts": "2025-03-23T14:22:25Z",
  "level": "debug",
  "component": "server.db",
  "query": "UPDATE playlists SET updated_at = NOW() WHERE screen_id = $1",
  "duration_ms": 5,
  "rows_affected": 1
}
```

- **errors and exceptions**

```json
{
  "ts": "2025-03-23T14:22:30Z",
  "level": "error",
  "component": "server.api",
  "event": "media_download_failed",
  "media_id": "abc123",
  "reason": "storage_quota_exceeded",
  "available_bytes": 1024000,
  "required_bytes": 50000000
}
```

- **admin commands**

```json
{
  "ts": "2025-03-23T14:22:35Z",
  "level": "info",
  "component": "server.command",
  "event": "command_sent",
  "command_type": "restart_player",
  "target_screen": "info01",
  "triggered_by_user": "admin123"
}
```

### Provisioning Worker

```json
{
  "ts": "2025-03-23T14:22:40Z",
  "level": "info",
  "component": "server.provision",
  "event": "provision_started",
  "screen_id": "new_display_01",
  "target_ip": "192.168.1.50",
  "ansible_playbook": "site.yml"
}
```
## Log Format and Output

### Structure

All logs follow this schema:

```json
{
  "ts": "2025-03-23T14:22:00Z",        // ISO 8601, UTC
  "level": "info|warn|error|debug",
  "component": "agent|ui|server.api|server.db|server.mqtt",
  "event": "descriptive_name",
  "screen_id": "info01",               // player-side only
  "tenant_id": "tenant01",             // server-side only
  "user_id": "user123",                // server-side, auth events only
  "duration_ms": 342,                  // performance events only

  // error-specific fields
  "error": "error_code",
  "error_message": "readable error",

  // domain-specific fields
  "item_id": "...",
  "media_type": "image|video|pdf|webpage",
  "source": "campaign|tenant_playlist|fallback",

  // arbitrary additional fields
  "details": { ... }
}
```
### Output Targets

#### On the Player

1. **stdout/stderr** with the `log/slog` JSON formatter
   - captured by the systemd journal
   - retrievable via `journalctl`

2. **Local file** `/var/log/signage/player.log`
   - JSON, one line per event
   - rotation at 100 MB, 10 archives

3. **Fast-path errors** to the server via HTTP POST
   - `POST /api/v1/screens/{screenSlug}/log-event`
   - asynchronous; failures while offline are ignored
   - only `error` and `fatal` events
#### On the Server

1. **stdout/stderr** with structured logging
   - captured by Docker/systemd
   - retrievable via `docker logs` or `journalctl`

2. **PostgreSQL** (Phase 2+)
   - important errors and status events in the `logs` table
   - query UI in the admin dashboard

3. **File storage** (Docker volume)
   - `/var/log/signage/server.log`
   - rotation and compaction handled by the container orchestrator
## Log-Level Strategy

### Debug (development)

- SQL queries
- HTTP request details
- internal state transitions

In production: `--log-level warn` or `--log-level info`

### Info (standard)

- startup/shutdown
- successful operations
- status changes
- synchronization events

### Warn (needs attention)

- timeouts
- retry attempts
- deprecated APIs
- suboptimal performance

### Error (problematic)

- failed HTTP requests
- database errors
- missing resources
- auth failures

### Fatal (critical)

- non-recoverable errors
- the process terminates afterwards
## Monitoring Metrics

### Player Side

Metrics the agent reports to the server periodically:

```json
{
  "screen_id": "info01",
  "ts": "2025-03-23T14:25:00Z",
  "heartbeat": {
    "uptime_seconds": 86400,
    "last_sync_at": "2025-03-23T14:24:55Z",
    "seconds_since_last_sync": 5,
    "sync_status": "ok|failed|pending",
    "sync_fail_count_24h": 0
  },
  "resources": {
    "cpu_percent": 25,
    "memory_percent": 45,
    "disk_free_mb": 2048,
    "disk_used_percent": 35
  },
  "network": {
    "broker_connected": true,
    "server_reachable": true,
    "ip_addresses": ["192.168.1.10"],
    "signal_strength_dbm": -55
  },
  "playback": {
    "current_item_id": "img-001",
    "source": "campaign",
    "rendering_status": "ok",
    "seconds_on_current_item": 23
  },
  "errors_last_hour": [
    {
      "event": "download_failed",
      "media_id": "video-999",
      "count": 2
    }
  ]
}
```
**Transport:** HTTP `POST /api/v1/screens/{screenSlug}/heartbeat` every 60 seconds

### Server Side

The server collects and monitors:

```json
{
  "screen_id": "info01",
  "status": "online|offline|degraded|error",
  "last_heartbeat_at": "2025-03-23T14:25:00Z",
  "seconds_since_last_heartbeat": 0,
  "heartbeat_interval_sec": 60,
  "offline_since_sec": null,

  "screenshot": {
    "latest_at": "2025-03-23T14:25:00Z",
    "seconds_since_latest": 0
  },

  "sync": {
    "latest_at": "2025-03-23T14:24:55Z",
    "latest_duration_ms": 342,
    "fail_count_24h": 1,
    "last_error": null
  },

  "content": {
    "current_item": "img-001",
    "source": "campaign",
    "campaign_id": "xmas-2025"
  },

  "performance": {
    "cpu_avg_percent_1h": 22,
    "memory_avg_percent_1h": 44,
    "disk_free_mb": 2048
  }
}
```
These metrics are stored in PostgreSQL and form the basis for:

- the status dashboard
- alerts
- trend analyses
- capacity planning
## Log Rotation on the Player

The Raspberry Pi has limited storage, so log rotation must be aggressive:

```
# /etc/logrotate.d/signage

/var/log/signage/player.log
{
    size 50M
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    create 0644 root root
    postrotate
        systemctl reload signage-agent.service || true
    endscript
}

/var/log/signage/watchdog.log
{
    size 20M
    rotate 3
    compress
    delaycompress
    missingok
    notifempty
    create 0644 root root
}
```

Result:

- `player.log`: at most 50 MB * 5 = 250 MB
- `watchdog.log`: at most 20 MB * 3 = 60 MB
- compression shrinks old logs to roughly 10 % of their original size
## Alerting Strategy

### Alert Criteria

| Condition | Severity | Action |
|---|---|---|
| Screen offline > 15 min | High | email + dashboard alert |
| Screen offline > 2 h | Critical | email + SMS |
| Sync failure rate > 50 % within 1 h | Medium | email |
| Disk full on player | Critical | email + stop recording |
| CPU > 90 % for 5 min | Medium | warning + analysis |
| Provisioning failed | High | email to the provisioner |
### Alert Channels (Phase 2)

1. **Dashboard notifications** (visible in the admin UI)
2. **Email** to configured admin addresses
3. **Webhook** for external monitoring systems (Zabbix, Grafana)
4. **Server API** `/api/v1/admin/alerts` for polling
## Summary

The logging and monitoring concept:

- **is structured** — JSON instead of free text
- **is distributed** — local on the player plus central on the server
- **is storage-aware** — rotation and compression
- **gives an overview** — heartbeat plus metrics for every screen
- **enables diagnosis** — detailed logs in case of failure
- **scales** — the approach works for any number of players
docs/PROVISION-KONZEPT.md (new file, 610 lines)
# Info-Board Neu - Jobrunner Concept for Ansible-Based Initial Installation

## Goal

The jobrunner executes provisioning jobs from the admin backend that technically bring a new display into operation.

This document describes:

- how an admin provisions a new screen from the web UI
- how the server orchestrates Ansible playbooks
- how progress is displayed
- security and error handling

The underlying provisioning strategy is described in `docs/PROVISIONIERUNGSKONZEPT.md`.
## 1. Provisioning Workflow in the Admin UI

### Page: Admin → Screens → New

```
┌──────────────────────────────────────────┐
│ Provision a new screen                   │
├──────────────────────────────────────────┤
│                                          │
│ Step 1 — Basic data                      │
│                                          │
│ Screen ID / slug *                       │
│ [ info10 ]                               │
│ (must be unique, alphanumeric)           │
│                                          │
│ Display name *                           │
│ [ Infowand Bottom-Left ______________ ]  │
│                                          │
│ Description                              │
│ [ Neue Infowand Display, pos. 7______ ]  │
│                                          │
│ Device type *                            │
│ ⦿ Raspberry Pi 4                         │
│ ○ Raspberry Pi 5                         │
│ ○ x86 Linux kiosk                        │
│                                          │
│ Resolution *                             │
│ [1920 x 1080 ]  default for RPi          │
│                                          │
│ Orientation *                            │
│ ⦿ portrait (vertical)                    │
│ ○ landscape (horizontal)                 │
│                                          │
│ Tenant assignment                        │
│ [ dropdown: all tenants + "admin" ]      │
│                                          │
│ [Next >]  [Cancel]                       │
└──────────────────────────────────────────┘
```
### Step 2 — Network and SSH Settings

```
┌──────────────────────────────────────────┐
│ Step 2 — Access to the hardware          │
│                                          │
│ Target IP address *                      │
│ [ 192.168.1.50 ]                         │
│                                          │
│ SSH port                                 │
│ [ 22 ]  default                          │
│                                          │
│ Bootstrap user *                         │
│ ⦿ root                                   │
│ ○ pi                                     │
│ ○ custom: [ ________________ ]           │
│                                          │
│ Bootstrap authentication *               │
│ ⦿ password (initial, replaced by a       │
│   key later):                            │
│   [ password ____________ ]              │
│ ○ SSH key (only if already present):     │
│   [ choose file ] or                     │
│   [ paste PEM key ]                      │
│                                          │
│ Connection test                          │
│ [SSH test]  [PING test]                  │
│                                          │
│ [Next >]  [Back]  [Cancel]               │
└──────────────────────────────────────────┘
```
### Step 3 — Configuration and Options

```
┌──────────────────────────────────────────┐
│ Step 3 — Configuration                   │
│                                          │
│ Fallback directory (local on the player) │
│ [ /var/lib/signage/fallback ]            │
│                                          │
│ Snapshot interval (seconds)              │
│ [ 300 ]  0 = disabled                    │
│                                          │
│ MQTT broker address (target server)      │
│ [ mqtt.example.com ]  pre-filled         │
│                                          │
│ Server API address                       │
│ [ https://signage.example.com/api ]      │
│   pre-filled                             │
│                                          │
│ Group assignment (optional)              │
│ [ checkboxes: wall-all, wall-row-1 ]     │
│                                          │
│ Tags / labels (optional)                 │
│ [ mainfloor, hightrafficarea ]           │
│                                          │
│ [Next >]  [Back]  [Cancel]               │
└──────────────────────────────────────────┘
```
### Step 4 — Review and Start

```
┌──────────────────────────────────────────┐
│ Step 4 — Overview & start                │
│                                          │
│ Summary:                                 │
│                                          │
│ Screen:       info10                     │
│ Name:         Infowand Bottom-Left       │
│ Type:         Raspberry Pi 4             │
│ IP:           192.168.1.50               │
│ Resolution:   1920 x 1080                │
│ Orientation:  portrait                   │
│ Tenant:       admin                      │
│                                          │
│ Establishing SSH connection...           │
│ [✓] SSH access verified                  │
│ [✓] Path permissions ok                  │
│ [✓] Sufficient disk space (15 GB)        │
│                                          │
│ Provisioning playbook:                   │
│ [ ] site.yml                             │
│   ├─ signage_base (packages, kernel)     │
│   ├─ signage_display (X11, Chromium)     │
│   ├─ signage_player (agent, config)      │
│   └─ signage_provision (setup jobs)      │
│                                          │
│ Warning:                                 │
│ ! This process cannot be interrupted.    │
│   Typical duration: 10-15 min.           │
│                                          │
│ [Start provisioning]  [Cancel]           │
└──────────────────────────────────────────┘
```
## 2. Provisioning Job: Server-Side Orchestration

### Architecture

```
┌─────────────────────────────────────────┐
│ Admin UI HTTP request                   │
│ POST /api/v1/admin/provision            │
└────────────┬────────────────────────────┘
             │
             ▼
┌─────────────────────────────────────────┐
│ Backend API (Go)                        │
│ - validates the input                   │
│ - creates a ProvisioningJob in the DB   │
│ - queues the job in a job broker        │
│   (Redis etc.)                          │
└────────────┬────────────────────────────┘
             │
             ▼
┌─────────────────────────────────────────┐
│ Jobrunner worker (goroutine or          │
│ separate Go service)                    │
│ - runs inside the server container      │
│ - streams progress via WebSocket        │
└────────────┬────────────────────────────┘
             │
             ▼
┌─────────────────────────────────────────┐
│ Ansible executor                        │
│ ansible-playbook site.yml               │
│   -i inventory.ini                      │
│   -e @vars.yml                          │
└────────────┬────────────────────────────┘
             │
             ▼
┌─────────────────────────────────────────┐
│ Target device (Raspberry Pi)            │
│ SSH: root@192.168.1.50                  │
│ - installs packages                     │
│ - starts services                       │
│ - synchronizes the config               │
└─────────────────────────────────────────┘
```
### Provisioning Job Model

```sql
CREATE TABLE provisioning_jobs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    screen_id UUID NOT NULL REFERENCES screens(id),
    status TEXT NOT NULL CHECK (status IN (
        'pending', 'running', 'completed', 'failed'
    )),
    started_at TIMESTAMPTZ,
    completed_at TIMESTAMPTZ,

    -- SSH/Ansible details
    target_ip TEXT NOT NULL,
    target_port INT NOT NULL DEFAULT 22,
    target_user TEXT NOT NULL,

    -- reference into the Ansible executor
    ansible_job_id TEXT, -- job ID from the Ansible executor

    -- error handling
    error_log TEXT, -- set on failure

    created_by_user_id TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```
### Provisioning Log Model

```sql
CREATE TABLE provisioning_logs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    job_id UUID NOT NULL REFERENCES provisioning_jobs(id) ON DELETE CASCADE,
    line_number INT NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- origin of the log line
    source TEXT NOT NULL CHECK (source IN ('ansible', 'agent', 'system')),
    level TEXT NOT NULL CHECK (level IN ('info', 'warn', 'error')),

    -- message
    message TEXT NOT NULL,

    UNIQUE(job_id, line_number)
);
```
## 3. Jobrunner Implementation

### Job Processing (Pseudocode)

```go
type ProvisioningJobRunner struct {
    db             *sql.DB
    ansibleBinPath string
    logChannel     chan ProvisioningLogMessage
}

func (r *ProvisioningJobRunner) ProcessJob(ctx context.Context, jobID uuid.UUID) error {
    // 1. Load the job from the DB
    job := r.db.GetProvisioningJob(jobID)

    // 2. Set status to "running"
    r.db.UpdateProvisioningJob(job.ID, map[string]interface{}{
        "status":     "running",
        "started_at": time.Now(),
    })

    // 3. Generate the Ansible inventory
    inventory := r.generateInventory(job)
    // [192.168.1.50]
    // ansible_user=root
    // ansible_password=***
    // screen_id=info10
    // ansible_become=yes

    // 4. Generate vars.yml
    vars := r.generateVars(job)
    // screen_id: info10
    // display_name: "Infowand Bottom-Left"
    // orientation: portrait
    // mqtt_broker: mqtt.example.com
    // etc.

    // 5. Run Ansible (the "@" makes -e read the variables from a file)
    cmd := exec.CommandContext(ctx,
        r.ansibleBinPath,
        "site.yml",
        "-i", inventoryPath,
        "-e", "@"+varsPath,
        "-v", // verbose
    )

    // 6. Pipe the Ansible output into log rows + WebSocket.
    // Note: with StdoutPipe/StderrPipe the command must be started via
    // Start() and awaited via Wait(); cmd.Run() would close the pipes
    // underneath the reader goroutines.
    stdout, _ := cmd.StdoutPipe()
    stderr, _ := cmd.StderrPipe()

    if err := cmd.Start(); err != nil {
        return err
    }

    go r.streamLogs(job.ID, stdout, "ansible")
    go r.streamLogs(job.ID, stderr, "ansible")

    // 7. Wait for completion
    err := cmd.Wait()

    // 8. Update the job status
    if err != nil {
        r.db.UpdateProvisioningJob(job.ID, map[string]interface{}{
            "status":       "failed",
            "completed_at": time.Now(),
            "error_log":    err.Error(),
        })
        return err
    }

    r.db.UpdateProvisioningJob(job.ID, map[string]interface{}{
        "status":       "completed",
        "completed_at": time.Now(),
    })

    return nil
}

func (r *ProvisioningJobRunner) streamLogs(jobID uuid.UUID, reader io.Reader, source string) {
    scanner := bufio.NewScanner(reader)
    // per-reader counter; stdout/stderr need distinct sources or a shared
    // atomic counter, since (job_id, line_number) is UNIQUE in the DB
    lineNum := 1

    for scanner.Scan() {
        line := scanner.Text()

        // Persist in the DB
        r.db.InsertProvisioningLog(ProvisioningLog{
            JobID:      jobID,
            LineNumber: lineNum,
            Source:     source,
            Level:      parseLogLevel(line), // heuristic
            Message:    line,
        })

        // Push to the WebSocket (see the "Progress" section)
        r.logChannel <- ProvisioningLogMessage{
            JobID: jobID,
            Line:  line,
        }

        lineNum++
    }
}
```
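The pseudocode above calls a `parseLogLevel` heuristic; a sketch of what it could look like for Ansible's plain-text output (the classification rules are an assumption, not the project's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// parseLogLevel classifies one line of ansible-playbook output.
// Heuristic only: plain Ansible output has no structured level field.
func parseLogLevel(line string) string {
	l := strings.ToLower(line)
	switch {
	case strings.HasPrefix(l, "fatal:"),
		strings.HasPrefix(l, "failed:"),
		strings.Contains(l, "unreachable"):
		return "error"
	case strings.Contains(l, "warning"):
		return "warn"
	default:
		return "info"
	}
}

func main() {
	fmt.Println(parseLogLevel("fatal: [192.168.1.50]: UNREACHABLE!"))
}
```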
### Ansible Execution via a Jump Host (Optional)

If the server cannot reach the target devices directly, a jump host can be used:

```ini
# ansible.cfg
[defaults]
inventory = inventory.ini
host_key_checking = False
retries = 3

[privilege_escalation]
become = True
become_method = sudo
```

```ini
# inventory.ini for the jump-host scenario
[targets]
192.168.1.50 ansible_user=root ansible_password=*** ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p jumphost@example.com"'
```
## 4. Progress and Live Updates

### WebSocket Channel for Real-Time Logs

**HTTP upgrade to WebSocket:**

```
GET /api/v1/admin/provision/{jobID}/logs
Upgrade: websocket
Connection: Upgrade
```

**The server continuously sends:**

```json
{
  "type": "log_line",
  "timestamp": "2025-03-25T14:22:00Z",
  "line": "TASK [signage_base : Update package cache] **",
  "source": "ansible",
  "level": "info"
}
```

```json
{
  "type": "progress",
  "timestamp": "2025-03-25T14:22:15Z",
  "current_task": "signage_base : Update package cache",
  "task_number": 3,
  "total_tasks": 12,
  "percent": 25
}
```

```json
{
  "type": "status_change",
  "timestamp": "2025-03-25T14:35:00Z",
  "status": "completed",
  "duration_seconds": 780
}
```
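All three message shapes share a `type` discriminator, so a client can decode them with one envelope struct; a sketch (field set reduced to the examples above, handler name illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// wsMessage is a superset envelope for the three message types;
// only the fields matching "type" are populated.
type wsMessage struct {
	Type      string `json:"type"`
	Timestamp string `json:"timestamp"`

	// type == "log_line"
	Line   string `json:"line,omitempty"`
	Source string `json:"source,omitempty"`
	Level  string `json:"level,omitempty"`

	// type == "progress"
	CurrentTask string `json:"current_task,omitempty"`
	TaskNumber  int    `json:"task_number,omitempty"`
	TotalTasks  int    `json:"total_tasks,omitempty"`
	Percent     int    `json:"percent,omitempty"`

	// type == "status_change"
	Status          string `json:"status,omitempty"`
	DurationSeconds int    `json:"duration_seconds,omitempty"`
}

// handle turns one raw WebSocket frame into a display string.
func handle(raw []byte) (string, error) {
	var m wsMessage
	if err := json.Unmarshal(raw, &m); err != nil {
		return "", err
	}
	switch m.Type {
	case "log_line":
		return fmt.Sprintf("[%s] %s", m.Level, m.Line), nil
	case "progress":
		return fmt.Sprintf("%d/%d (%d%%)", m.TaskNumber, m.TotalTasks, m.Percent), nil
	case "status_change":
		return fmt.Sprintf("job %s after %ds", m.Status, m.DurationSeconds), nil
	default:
		return "", fmt.Errorf("unknown message type %q", m.Type)
	}
}

func main() {
	out, _ := handle([]byte(`{"type":"progress","task_number":3,"total_tasks":12,"percent":25}`))
	fmt.Println(out) // → 3/12 (25%)
}
```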
### UI View While Provisioning

```
┌──────────────────────────────────────────┐
│ Provisioning running: info10             │
│ Started: 5 min ago                       │
│ Estimated time remaining: 8 min          │
├──────────────────────────────────────────┤
│                                          │
│ [████████████░░░░░░░░░░░░░░] 33%         │
│                                          │
│ Current task:                            │
│ ⊙ signage_base : Update package cache    │
│                                          │
│ Recent logs:                             │
│ ├─ [14:22:00] TASK [signage_base ...]    │
│ ├─ [14:22:05] ok: [192.168.1.50]         │
│ ├─ [14:22:10] TASK [signage_display]     │
│ ├─ [14:22:15] installing Chromium        │
│ └─ [14:22:20] ...                        │
│                                          │
│ [Auto-refresh]  [Pause]  [Abort]         │
│ (Abort: the SSH connection is not cut    │
│  immediately, but the job is stopped)    │
└──────────────────────────────────────────┘
```
## 5. Error Handling and Recovery

### Failure Scenarios

| Failure | Cause | Recovery |
|---|---|---|
| SSH connection failed | wrong IP, wrong password, firewall | logs show the SSH error; the admin can correct the credentials and restart |
| Ansible playbook failed | package version conflict, disk full | logs show which task failed; the admin can SSH in manually or rerun the job |
| Timeout after 30 min | very slow network or a hung device | the job is aborted; the admin can check the connection and restart |
| Package download failed | mirror offline, network interruption | Ansible retries 3x automatically; logs show the wget error |
### Retry Logic

```
Strategy: exponential backoff for playbook failures
Failure 1: retry immediately
Failure 2: wait 5 s, retry
Failure 3: wait 15 s, retry
Failure 4+: give up, surface the error
```
### Admin Recovery

If a job has failed:

```
┌──────────────────────────────────────────┐
│ Provisioning failed: info10              │
│                                          │
│ Error:                                   │
│ ssh: Could not resolve hostname          │
│ (DNS error or device unreachable)        │
│                                          │
│ Recommendation:                          │
│ 1. Check the IP address                  │
│ 2. SSH into the device manually and test │
│ 3. Restart the job: [Retry]              │
│                                          │
│ Download the full logs:                  │
│ [logs-info10-20250325.txt]               │
│                                          │
│ [Retry]  [Show logs]  [Back]             │
└──────────────────────────────────────────┘
```
## 6. Sicherheitsaspekte
|
||||||
|
|
||||||
|
### SSH-Key-Verwaltung
|
||||||
|
|
||||||
|
**Phase 1 — Bootstrap mit Passwort:**
|
||||||
|
|
||||||
|
```
Admin gibt Passwort ein
↓
Server speichert Passwort NICHT
↓
Server uebergibt es nur waehrend dieser Session an Ansible
↓
Ansible loggt sich ein, generiert SSH-Key
↓
SSH-Key wird auf dem Geraet als authorized_key eingetragen
↓
Passwort-Login wird auf dem Geraet geloescht oder deaktiviert
```
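Die beiden letzten Schritte lassen sich in Ansible z.B. so skizzieren; Modulnamen (`authorized_key`, `lineinfile`) sind Standard-Ansible, Pfade und der Benutzername `ansible` sind Annahmen dieses Beispiels:

```yaml
# Annahme: erster Lauf mit --ask-pass; danach greift der Key.
- name: SSH-Key des Servers auf dem Geraet hinterlegen
  ansible.posix.authorized_key:
    user: ansible
    state: present
    key: "{{ lookup('file', '/etc/signage/.ssh/id_ed25519.pub') }}"

- name: Passwort-Login fuer SSH deaktivieren
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: 'PasswordAuthentication no'
  notify: restart sshd
```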

**Phase 2 — Dauerhaft mit SSH-Key:**

```
Server speichert SSH-Key in Secrets-Backend (z.B. HashiCorp Vault)
Zukuenftige Ansible-Laeufe verwenden den Key
```

### Ansible-Vault fuer sensitive Daten

```yaml
# roles/signage_player/defaults/main.yml
server_api_key: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  abcd1234...
```

Die Vault-Passphrase wird:

- nie im Klartext gespeichert
- vom Server nur zur Laufzeit an Ansible uebergeben
- in Logs nicht ausgegeben

### Sudo ohne Passwort

Ansible erhoeht die Rechte per `sudo` ohne Passwort-Eingabe:

```sudoers
# /etc/sudoers.d/ansible-signage
ansible ALL=(ALL) NOPASSWD: ALL
```

(Alternativ: mit Passwort, das Ansible am Anfang einmal abfragt)

## 7. Verbindung zum bestehenden System

### Provisioning-Trigger aus Admin-UI

```
Admin-Seite: Screens → "+ Neuer Screen"
↓
Formular sammelt Grunddaten
↓
POST /api/v1/admin/provision
↓
Backend:
  1. Screen in `screens` Tabelle eintragen
  2. ProvisioningJob in `provisioning_jobs` anlegen
  3. Job in Broker queuen
↓
Jobrunner:
  1. Holt Job aus Broker
  2. Startet Ansible
  3. Streamt Logs via Websocket
  4. Aktualisiert Job-Status bei Completion
↓
Admin sieht Live-Updates im UI
```
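Der Backend-Teil des Ablaufs laesst sich als Handler skizzieren; eine minimale Skizze in Go, Typ- und Feldnamen (`provisionRequest`, `screen_slug`, `host`) sind Annahmen dieses Beispiels, der DB-Insert ist ausgelassen:

```go
package main

import (
	"encoding/json"
	"errors"
	"net/http"
)

type provisionRequest struct {
	ScreenSlug string `json:"screen_slug"`
	Host       string `json:"host"`
}

func validateProvision(req provisionRequest) error {
	if req.ScreenSlug == "" || req.Host == "" {
		return errors.New("screen_slug and host are required")
	}
	return nil
}

// handleProvision skizziert den Ablauf: Request dekodieren, validieren,
// Screen/Job anlegen (hier ausgelassen) und den Job ueber enqueue queuen.
func handleProvision(enqueue func(jobID string) error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req provisionRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "invalid request body", http.StatusBadRequest)
			return
		}
		if err := validateProvision(req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		jobID := "job-" + req.ScreenSlug // Platzhalter fuer den DB-Insert
		if err := enqueue(jobID); err != nil {
			http.Error(w, "failed to enqueue job", http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusAccepted)
		json.NewEncoder(w).Encode(map[string]string{"job_id": jobID})
	}
}

func main() {
	// Skizze der Registrierung; ListenAndServe ist hier bewusst weggelassen.
	http.HandleFunc("/api/v1/admin/provision",
		handleProvision(func(string) error { return nil }))
}
```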

### Nach erfolgreichem Provisioning

```
Job-Status: "completed"
↓
Agent auf dem Display startet
↓
Agent registriert sich beim Server
↓
Server setzt Screen-Status auf "online"
↓
Admin sieht Screen in Tabelle mit Status "online"
↓
Admin kann sofort Kampagnen/Playlists zuweisen
```

## 8. Konfigurierbare Parameter

In `/etc/signage/provision.yml`:

```yaml
jobrunner:
  max_concurrent_jobs: 3
  ansible_timeout_sec: 1800
  playbook_path: "/srv/ansible/site.yml"
  inventory_template_path: "/srv/ansible/inventory.ini.tpl"
  vars_template_path: "/srv/ansible/vars.yml.tpl"

ssh:
  known_hosts_file: "/etc/signage/.ssh/known_hosts"
  key_storage: "vault" # oder "filesystem"

ansible:
  verbosity: "-vv" # oder "-v", "-vvv"
  extra_args: ""
```

## 9. Zusammenfassung

Der Jobrunner:

- **ist web-gesteuert** — Provisioning-UI mit Multi-Step-Wizard
- **ist automatisiert** — Ansible-Playbooks statt manueller SSH-Kommandos
- **ist transparent** — Live-Logs und Fortschritts-Anzeige
- **ist sicher** — SSH-Keys, Ansible-Vault, keine Plaintext-Credentials in Logs
- **ist resilient** — Retry-Logik und Error-Recovery
- **ist erweiterbar** — neue Rollen und Tasks koennen ohne UI-Aenderung hinzugefuegt werden

494
docs/TEMPLATE-EDITOR.md
Normal file
@ -0,0 +1,494 @@
# Info-Board Neu - Template-Editor fuer globale Kampagnen

## Ziel

Der Template-Editor ist der Bereich des Admin-UI fuer die fachliche Erstellung und Verwaltung globaler Templates und deren operative Aktivierung als Kampagnen.

Dieses Dokument definiert:

- welche Schritte ein Admin unternimmt, um ein Template zu erstellen
- welche Felder und Optionen der Editor anbietet
- wie Templates zu Kampagnen aktiviert werden
- wie die Abbildung im Datenmodell aussieht

Grundlagen zu Template-Typen, Slot-Modell und Message-Wall finden sich in `docs/TEMPLATE-KONZEPT.md`.

## 1. Template-Verwaltung

### Template-Liste

**Seite:** Admin → Templates

**Anzeige:**

Tabelle mit allen Templates:

| Name | Typ | Zielgruppe | Szenen | Erstellt | Status |
|---|---|---|---|---|---|
| Weihnachtsmotiv 2025 | full_screen_media | alle | 1 | 2025-01-15 | draft |
| Schriftzug Infowand | message_wall | wall-all | 9 | 2025-02-01 | active |
| Event-Tag 25.03 | screen_specific_scene | [info01, info02, ...] | 2 | 2025-03-01 | draft |

**Aktionen pro Zeile:**

- "Bearbeiten" — oeffnet den Template-Editor
- "Kopieren" — dupliziert das Template als neuen Draft
- "Loeschen" — nur moeglich, wenn keine aktiven Kampagnen existieren
- "Vorschau" — zeigt das Layout (fuer message_wall) oder die Asset-Galerie
- "Aktivieren" — schneller Weg, eine Kampagne zu starten

### Template-Editor (Erstellung/Bearbeitung)

#### Phase 1 — Grunddaten

```
┌─────────────────────────────────────────┐
│ Neues Template erstellen │
├─────────────────────────────────────────┤
│ │
│ Name * │
│ [ Weihnachtsmotiv 2025_______________ ]│
│ technischer slug wird automatisch │
│ │
│ Template-Typ * │
│ ⦿ full_screen_media │
│ ○ message_wall │
│ ○ screen_specific_scene │
│ │
│ Beschreibung │
│ [ Weihnachtliche Grafik fuer alle___ ] │
│ [ Screens __________________________ ]│
│ │
│ Zielgruppe / Screens * │
│ ⦿ Alle Screens │
│ ○ Nach Gruppe auswaehlen │
│ [Dropdown: wall-all, single-all, ...] │
│ ○ Einzelne Screens auswaehlen │
│ [Checkbox-Liste mit Filterung] │
│ │
│ [Weiter >] [Abbrechen] │
└─────────────────────────────────────────┘
```

**Validierung:**

- Name ist erforderlich
- Name ist eindeutig
- Template-Typ ist erforderlich
- Zielgruppe ist erforderlich (keine leere Zuweisung)

#### Phase 2 — Szenen/Inhalte

Fuer `full_screen_media`:

```
┌─────────────────────────────────────────┐
│ Szenen und Inhalte │
├─────────────────────────────────────────┤
│ │
│ Szene 1: Vollbild-Grafik │
│ │
│ Medientyp * │
│ ○ Bild │
│ ○ Video │
│ ○ PDF │
│ ⦿ Webseite (HTML) │
│ │
│ Portrait-Asset (Hochformat) │
│ [Upload oder URL] │
│ [ Datei auswaehlen ] [Neue URL] │
│ oder vorher gemanagte Assets: [Liste] │
│ │
│ Landscape-Asset (Querformat) [optional] │
│ [ Datei auswaehlen ] [Neue URL] │
│ │
│ Anzeigedauer (Sekunden) │
│ [60_____] Standard: 10 │
│ │
│ Load-Timeout (Sekunden) │
│ [10_____] Standard: 10 │
│ │
│ gueltig ab │
│ [ 2025-03-25 ] [ 00:00 ] │
│ (leer = sofort gueltig) │
│ │
│ gueltig bis │
│ [ 2025-04-01 ] [ 00:00 ] │
│ (leer = unendlich) │
│ │
│ [+ Weitere Szene hinzufuegen] │
│ │
│ [Zurueck <] [Speichern & Aktivieren] │
│ [Speichern] │
│ [Abbrechen] │
└─────────────────────────────────────────┘
```

Fuer `message_wall`:

```
┌─────────────────────────────────────────┐
│ Message-Wall Layout │
├─────────────────────────────────────────┤
│ │
│ Layout-Template │
│ [Dropdown: 3x3-Grid, 2x2-Grid, ...] │
│ │
│ Anzeigedauer (Sekunden) │
│ [10_____] │
│ │
│ Gesamt-Grafik oder Text eingeben │
│ [Rich-Text-Editor oder Bild-Upload] │
│ │
│ Vorschau: [Zeigt Einteilung in Slots] │
│ │
│ Slot-Zuordnung: [Interaktive Zuordnung] │
│ Slot wall-r1-c1 → Screen info01 │
│ Slot wall-r1-c2 → Screen info02 │
│ ... (9 Slots insgesamt) │
│ │
│ [+ Layout-Typ aendern] [Speichern] │
│ │
│ [Zurueck <] [Speichern & Aktivieren] │
│ [Speichern] │
│ [Abbrechen] │
└─────────────────────────────────────────┘
```

Fuer `screen_specific_scene`:

```
┌─────────────────────────────────────────┐
│ Monitorindividuelle Szenen │
├─────────────────────────────────────────┤
│ │
│ Szene 1: Infowand │
│ │
│ Zielgruppe │
│ ⦿ Gruppe: [Dropdown: wall-all] │
│ ○ Einzelne Screens: [Checkboxen] │
│ │
│ Asset │
│ [Upload oder URL] │
│ │
│ Dauer, Timeout, gueltig_von/bis │
│ [... wie oben ...] │
│ │
│ [+ Weitere Szene hinzufuegen] │
│ │
│ [Zurueck <] [Speichern & Aktivieren] │
└─────────────────────────────────────────┘
```

## 2. Kampagnen-Verwaltung

Kampagnen sind die operativen Instanzen von Templates.

### Kampagnen-Liste

**Seite:** Admin → Kampagnen

**Anzeige:**

| Name | Template | Aktiv | Zielgruppe | gueltig von | gueltig bis | Betroffene Screens |
|---|---|---|---|---|---|---|
| Weihnachten Dekoration | Weihnachtsmotiv 2025 | ✓ | alle | 2025-12-01 | 2025-12-26 | 13 Screens |
| Schriftzug Januar | Schriftzug Infowand | ✗ | wall-all | 2025-01-06 | 2025-01-31 | 9 Screens |

**Aktionen:**

- "Bearbeiten" — Kampagnen-Eigenschaften aendern
- "Aktivieren/Deaktivieren" — Toggle mit sofortiger Wirkung
- "Vorschau" — zeigt betroffene Screens mit Rendering
- "Duplizieren" — als neue Kampagne, ggf. mit anderem Template
- "Loeschen" — nur wenn inaktiv und abgelaufen

### Neue Kampagne starten

**Workflow Option 1 — Von Template aus:**

Template-Liste → [Template] → "Aktivieren"

```
┌─────────────────────────────────────────┐
│ Kampagne starten: Weihnachtsmotiv 2025 │
├─────────────────────────────────────────┤
│ │
│ Kampagnen-Name │
│ [ Weihnachten 2025 einfuehrung____ ] │
│ │
│ Aktiv ab sofort? │
│ ⦿ Ja │
│ ○ Geplant fuer: [Datum/Zeit auswaehlen]│
│ [ 2025-12-01 ] [ 09:00 ] │
│ │
│ Gueltig von │
│ [ 2025-12-01 ] [ 00:00 ] │
│ │
│ Gueltig bis │
│ [ 2025-12-26 ] [ 23:59 ] │
│ │
│ Prioritaet (gegenueber Playlist) │
│ [1 (hoehere Werte sind wichtiger)] ___ │
│ │
│ Auto-Deaktivierung bei Ablauf? │
│ ⦿ Ja │
│ ○ Nein (Kampagne bleibt aktiv) │
│ │
│ [Kampagne starten] [Abbrechen] │
└─────────────────────────────────────────┘
```

**Workflow Option 2 — Neue Kampagne ohne Template:**

Admin → Kampagnen → "+ Neue Kampagne"

```
[Template auswaehlen] → [Grunddaten] → [Aktivierung]
```

### Kampagnen-Detailseite

**Anzeige einer laufenden Kampagne:**

```
Kampagne: Weihnachten 2025 einfuehrung
Status: AKTIV seit 2025-12-01 09:00

Template: Weihnachtsmotiv 2025 (full_screen_media)
Zielgruppe: Alle (13 Screens)

Gueltig: 2025-12-01 00:00 bis 2025-12-26 23:59
Prioritaet: 1

Betroffene Screens:
┌──────────────────────────────┐
│ info01 online aktiv │ [Screenshot]
│ info02 online aktiv │ [Screenshot]
│ info03 offline ausstehend │
│ info04 online aktiv │ [Screenshot]
│ ... (10 weitere) ... │
└──────────────────────────────┘

Aktionen:
[Deaktivieren] [Bearbeiten] [Vorschau]

Aktivierungsverlauf:
2025-12-01 09:00 — Kampagne gestartet von admin@...
2025-12-01 09:05 — 9 Screens haben gerendert
2025-12-01 10:30 — info03 ging offline, Kampagnen-Inhalt wartet auf Rueckkehr
```

## 3. Verknuepfung zur Prioritaetsregel

Die Regel `campaign > tenant_playlist > fallback` ist:

- **hardcoded** im Player
- **administrierbar** ueber die Kampagnen-Aktivierung
- **vorhersagbar** durch klare Doku

### Abbildung im System

```
Fuer jeden Screen:
  IF Kampagne fuer diesen Screen aktiv UND gueltig_von <= jetzt <= gueltig_bis
    THEN Zeige Kampagnen-Inhalt
  ELSE IF Tenant-Playlist hat gueltige Items
    THEN Zeige Tenant-Playlist
  ELSE
    Zeige Fallback
```

Diese Logik wird:

1. **serverseitig** berechnet bei jedem Sync-Request (HTTP `/api/v1/screens/{screenSlug}/playlist`)
2. **playerseitig** nochmals geprueft beim Rendering (fuer Offline-Robustheit)

### Admin-Sichtbarkeit

Die Admin-UI zeigt auf der Seite "Screens" fuer jeden Monitor:

```
|
||||||
|
info01
|
||||||
|
├── Kampagne (AKTIV bis 2025-12-26)
|
||||||
|
│ └── Weihnachten 2025 einfuehrung
|
||||||
|
├── Fallback (wird nach Kampagnen-Ablauf gezeigt)
|
||||||
|
└── Tenant Playlist
|
||||||
|
├── Playlist A (Tenant XYZ)
|
||||||
|
│ ├── Bild-1 (gueltig bis 2025-04-01)
|
||||||
|
│ ├── Video-2 (laedt...)
|
||||||
|
│ └── Webseite-3
|
||||||
|
└── Fallback-Verzeichnis
|
||||||
|
```
|
||||||
|
|
||||||
|
Diese View zeigt, was der Screen **aktuell gerade zeigt** und warum.
|
||||||
|
|
||||||
|
## 4. Datenmodell
|
||||||
|
|
||||||
|
### Tabelle `templates`
|
||||||
|
|
||||||
|
```sql
CREATE TABLE templates (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  slug TEXT NOT NULL UNIQUE,
  name TEXT NOT NULL,
  description TEXT,
  template_type TEXT NOT NULL CHECK (template_type IN ('message_wall', 'full_screen_media', 'screen_specific_scene')),
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  created_by_user_id TEXT NOT NULL,

  -- Serialisierte Konfiguration (JSON)
  config JSONB NOT NULL DEFAULT '{}'
  -- Beispiel:
  -- {
  --   "target_mode": "all_screens" | "group" | "specific_screens",
  --   "target_group": "wall-all" (wenn target_mode = "group"),
  --   "target_screen_ids": ["..."] (wenn target_mode = "specific_screens"),
  --   "scenes": [
  --     {
  --       "media_type": "image|video|pdf|webpage|html",
  --       "asset_id": "...",
  --       "portrait_asset_id": "..." (optional),
  --       "landscape_asset_id": "..." (optional),
  --       "duration_sec": 10,
  --       "load_timeout_sec": 10,
  --       "valid_from": "2025-03-25T00:00:00Z",
  --       "valid_until": "2025-04-01T23:59:59Z"
  --     }
  --   ]
  -- }
);
```

### Tabelle `campaigns`

```sql
CREATE TABLE campaigns (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  name TEXT NOT NULL,
  template_id UUID NOT NULL REFERENCES templates(id),
  active BOOLEAN NOT NULL DEFAULT false,
  priority INT NOT NULL DEFAULT 1,
  valid_from TIMESTAMPTZ NOT NULL,
  valid_until TIMESTAMPTZ,
  auto_deactivate BOOLEAN NOT NULL DEFAULT true,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  created_by_user_id TEXT NOT NULL,

  -- ueberschreibt/erweitert die Template-Zielgruppe (optional)
  target_mode TEXT CHECK (target_mode IN ('template', 'all_screens', 'group', 'specific_screens')),
  target_group TEXT,
  target_screen_ids UUID[] DEFAULT '{}'::uuid[]
);
```

### Tabelle `campaign_screen_assignments` (generiert)

Diese Tabelle wird **serverseitig** generiert/gepflegt, wenn eine Kampagne aktiv wird.

Sie expandiert Gruppen in konkrete Screen-IDs:

```sql
CREATE TABLE campaign_screen_assignments (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  campaign_id UUID NOT NULL REFERENCES campaigns(id) ON DELETE CASCADE,
  screen_id UUID NOT NULL REFERENCES screens(id) ON DELETE CASCADE,
  assigned_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  UNIQUE(campaign_id, screen_id)
);
```

**Logik:**

```
IF campaign.target_mode = 'template'
  THEN Fuelle campaign_screen_assignments aus template.config.target_screen_ids
ELSE IF campaign.target_mode = 'group'
  THEN Fuelle campaign_screen_assignments aus allen Screens in campaign.target_group
ELSE IF campaign.target_mode = 'specific_screens'
  THEN Fuelle campaign_screen_assignments aus campaign.target_screen_ids
ELSE
  (alle Screens)
```
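Die Expansions-Logik als Skizze in Go; `screensByGroup` und `allScreens` stehen stellvertretend fuer DB-Abfragen und sind Annahmen dieses Beispiels (der Modus `template` wuerde in der Praxis vorab auf einen der anderen Modi aufgeloest):

```go
package main

import "fmt"

// expandTargets expandiert einen Kampagnen-Zielmodus in konkrete Screen-IDs.
func expandTargets(mode, group string, specific []string,
	screensByGroup map[string][]string, allScreens []string) []string {
	switch mode {
	case "group":
		return screensByGroup[group]
	case "specific_screens":
		return specific
	default: // all_screens (und aufgeloestes 'template')
		return allScreens
	}
}

func main() {
	groups := map[string][]string{"wall-all": {"info01", "info02", "info03"}}
	all := []string{"info01", "info02", "info03", "info04"}
	fmt.Println(expandTargets("group", "wall-all", nil, groups, all))
}
```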

## 5. Praxis-Beispiele

### Beispiel 1 — Weihnachtsplakatierung (full_screen_media)

**Szenario:**

Admin will ab 01.12.2025 fuer 4 Wochen ein rotes Weihnachtsmotiv auf allen Screens zeigen.

**Schritte:**

1. Admin → Templates → "+ Neues Template"
   - Name: `Weihnachtsmotiv 2025`
   - Typ: `full_screen_media`
   - Zielgruppe: `Alle Screens`

2. Szene hinzufuegen:
   - Bild hochladen (passend fuer Portrait und Landscape)
   - Dauer: 10 Sekunden

3. Speichern → Editor zeigt Draft mit Vorschau

4. Admin → Templates → [Weihnachtsmotiv 2025] → "Aktivieren"
   - Kampagnen-Name: `Weihnachten 2025 globale Dekoration`
   - Gueltig von: 2025-12-01
   - Gueltig bis: 2025-12-26
   - Aktiv ab: sofort

5. Kampagne speichern → Sofort sichtbar auf allen Screens

### Beispiel 2 — Schriftzug ueber die Infowand (message_wall)

**Szenario:**

Admin hat eine neue `message_wall`-Gruppe "wall-all" mit 9 Screens. Er will ein grosses rotes Schriftzug-Motiv auf alle 9 Screens aufteilen.

**Schritte:**

1. Admin → Templates → "+ Neues Template"
   - Name: `Roter Schriftzug auf Infowand`
   - Typ: `message_wall`
   - Zielgruppe: `Gruppe: wall-all`

2. Layout waehlen: `3x3-Grid` (passt zu 9 Screens)

3. Gesamte Grafik hochladen (oder als Text eingeben)

4. Slot-Zuordnung:
   - System zeigt interaktive 3x3-Vorschau
   - Admin ordnet zu: "Slot 1 → info01", "Slot 2 → info02", ...
   - System generiert automatisch die Crop-Regionen

5. Speichern + Aktivieren
   - Jeder Screen zeigt seinen Ausschnitt

### Beispiel 3 — Deaktivierung und Fallback

**Szenario:**

Kampagne laeuft seit 2 Wochen. Admin will sie sofort stoppen, damit die Screens auf ihre normalen Playlists zurueckfallen.

**Aktion:**

Admin → Kampagnen → [Kampagne] → "Deaktivieren"

**Folge:**

- Server setzt `campaigns.active = false`
- Beim naechsten Sync laedt jeder Player wieder die Tenant-Playlist
- Fallback-Verzeichnis wird nur noch angezeigt, wenn die tenantbezogene Playlist leer ist

## 6. Zusammenfassung

Der Template-Editor:

- **ist zweistufig** — Template-Verwaltung + Kampagnen-Aktivierung
- **ist intuitiv** — Multi-Step-Formulare mit Vorschauen
- **unterstuetzt alle Template-Typen** — full_screen, message_wall, screen_specific
- **haelt die Prioritaetsregel transparent** — Admin sieht, welche Kampagne welche Screens uebersteuert
- **ist zukunftssicher** — Datenmodell skaliert mit neuen Template-Typen

305
docs/WATCHDOG-KONZEPT.md
Normal file
@ -0,0 +1,305 @@
# Info-Board Neu - Watchdog-Konzept

## Ziel

Der Watchdog ueberwacht die kritischen Komponenten des Players und sorgt dafuer, dass der Display-Betrieb bei Abstuerzen oder Haengern automatisch wiederhergestellt wird.

Die Ueberwachung erfolgt auf zwei Ebenen:

1. **Browser-Watchdog** — Ueberwachung von Chromium
2. **Agent-Watchdog** — Ueberwachung des Player-Agents

## Grundprinzipien

- Watchdogs sind extern und unabhaengig von den ueberwachten Prozessen
- Erkennung erfolgt aktiv durch Health-Checks, nicht passiv durch Liveness-Pings
- Restart-Strategien sind progressiv und vermeiden Restart-Schleifen
- Logging ist strukturiert und fuer Admin-Diagnosen aussagekraeftig

## Browser-Watchdog (Chromium-Ueberwachung)

### Aufgaben

Der Browser-Watchdog sorgt dafuer, dass:

- Chromium staendig laeuft und antwortet
- der Renderer nicht in einer Endlosschleife haengt
- Rendering-Fehler nicht zu permanenten Schwarzbildern fuehren
- Chromium bei Crash oder Haenger schnell neu gestartet wird

### Health-Check-Verfahren

Der Watchdog fuehrt regelmaessig folgende Checks durch:

#### 1. Prozess-Check

```
Existiert der Chromium-Prozess noch?
- lsof oder ps-Abfrage auf die PID
- Timeout: sofort bei fehlender PID
```

#### 2. HTTP-Health-Check auf localhost

```
GET http://localhost:8081/health
Timeout: 5 Sekunden
Erwartet: 200 OK und JSON-Antwort {status: "ok"}
```

Die `player-ui` muss einen einfachen `/health`-Endpunkt bereitstellen, der schnell antwortet, auch wenn die Playlist gerade verarbeitet wird.
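Die Watchdog-Seite dieses Checks laesst sich so skizzieren; eine minimale Skizze in Go, die Hilfsfunktion `evalHealth` ist eine Annahme dieses Beispiels:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

// evalHealth prueft Statuscode und JSON-Body gemaess der Erwartung oben.
func evalHealth(statusCode int, body []byte) bool {
	if statusCode != http.StatusOK {
		return false
	}
	var payload struct {
		Status string `json:"status"`
	}
	if err := json.Unmarshal(body, &payload); err != nil {
		return false
	}
	return payload.Status == "ok"
}

// checkHealth fuehrt den GET mit 5s-Timeout aus; false bedeutet unhealthy.
func checkHealth(url string) bool {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false
	}
	return evalHealth(resp.StatusCode, body)
}

func main() {
	fmt.Println(checkHealth("http://localhost:8081/health"))
}
```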

#### 3. Rendering-Verifizierung (optional, Phase 2)

```
Screenshot-basiert erkennen, ob der Browser:
- eine Fehlerseite zeigt
- komplett schwarz ist (mehr als 95% schwarze Pixel)
- seit mehreren Minuten denselben Content zeigt, obwohl ein Wechsel erwartet wurde
```

Diese Methode ist fuer v1 optional, wird aber fuer die spaetere Haenger-Erkennung eingeplant.

### Ueberwachungs-Intervall

- Health-Check alle **30 Sekunden**
- Bei Fehler: sofort Neustart pruefen (kein Warten auf den naechsten Zyklus)

### Restart-Strategie

#### Strategie: Exponentieller Backoff mit Maximum

```
Fehlerfall:
Fehler 1: Sofortiger Neustart (0s Wartezeit)
Fehler 2: Warte 2s, versuche Restart
Fehler 3: Warte 5s, versuche Restart
Fehler 4: Warte 10s, versuche Restart
Fehler 5+: Warte 30s, versuche Restart

Nach 10 aufeinanderfolgenden Fehlern ohne erfolgreiche Recovery:
- Alert an Admin (via Server-Status)
- Overlay auf "Error" setzen
- Watchdog-Loop auf 5-Minuten-Intervall verlangsamen
```

#### Erfolgs-Kriterium

Wenn der Health-Check 3x hintereinander erfolgreich ist:

- Backoff-Zaehler auf 0 zuruecksetzen
- der naechste Fehler startet wieder mit einem Sofort-Restart

### Logging

Jedes Watchdog-Ereignis wird protokolliert:

```json
{
  "ts": "2025-03-23T14:22:15Z",
  "component": "browser_watchdog",
  "event": "restart",
  "reason": "health_check_timeout",
  "attempt": 2,
  "next_retry_in_ms": 5000,
  "details": {
    "pid_before": 1234,
    "pid_after": 1245,
    "http_status_before": 0
  }
}
```

Logging-Ziele:

- strukturiert auf stdout/stderr (JSON)
- lokal in `/var/log/signage/watchdog.log` mit Rotation
## Agent-Watchdog (systemd-Integration)

### Aufgaben

Der Agent-Watchdog (bzw. die systemd-Unit) sorgt dafuer, dass:

- der Player-Agent staendig laeuft
- er nach einem Crash oder gewolltem Stopp schnell neu gestartet wird
- Restart-Grenzen eine endlose Restart-Schleife verhindern

### systemd-Konfiguration

```ini
[Service]
Type=simple
ExecStart=/usr/local/bin/player-agent
Restart=always
RestartSec=5
StartLimitInterval=300
StartLimitBurst=10
StandardOutput=journal
StandardError=journal
```

**Bedeutung:**

- `Restart=always` — Neustart bei jedem Exit (unabhaengig vom Exit-Code)
- `RestartSec=5` — warte 5 Sekunden vor dem Neustart
- `StartLimitInterval=300` — zaehle Restarts in einem 300s-Fenster
- `StartLimitBurst=10` — mehr als 10 Restarts in 300s fuehren zum systemd-Stop

Wenn `StartLimitBurst` erreicht wird:

- systemd startet den Service nicht weiter neu
- Admin wird informiert (Status-API setzt `agent_watchdog_failed`)
- manueller Eingriff oder Admin-Kommando noetig

### Health-Check durch den Agent selbst

Der Agent sollte intern:

- die Broker-Verbindung regelmaessig pruefen
- den Server-Sync-Status tracken
- bei kritischen internen Fehlern nicht einfach weiterlaufen

Wenn sich der Agent selbst als irreparabel beschaedigt erkennt:

- strukturiert mit Exit-Code `1` beenden (systemd startet neu)
- nicht mit `exit(0)` beenden und nicht haengen bleiben

## Verhaeltnis zu systemd

### Architektur-Entscheidung

`systemd` uebernimmt die Prozess-Wiederbelebung fuer den Agent.

Der Browser-Watchdog ist ein **separater, von systemd unabhaengiger Prozess**, weil:

- Chromium staendiger Ueberwachung bedarf (Health-Checks im 30s-Rhythmus)
- ein reiner systemd-Watchdog-Timer zu grob waere (nur an/aus, nicht granular)
- der Browser-Watchdog auch die systemd-Unit selbst mit ueberwachen kann (defensive Architektur)

### Optional: systemd WatchdogSec

Fuer den Agent ist es sinnvoll, zusaetzlich systemds Watchdog-Timer zu nutzen:

```ini
[Service]
WatchdogSec=30
ExecStart=/usr/local/bin/player-agent
```

Der Agent muesste dann periodisch `WATCHDOG=1` per sd_notify senden (nicht nur einmalig `READY=1`).

Das ist **optional fuer v1**, wird aber fuer spaetere Robustheit eingeplant.
## Integration mit Player-Setup

### Verzeichnisstruktur

```
/usr/local/bin/
  player-agent        — Go-Binary
  browser-watchdog    — Go-Binary oder Shell-Script

/etc/systemd/system/
  signage-agent.service
  signage-browser-watchdog.service

/var/lib/signage/
  watchdog-state.json — letzter Zustand, Backoff-Counter

/var/log/signage/
  watchdog.log        — strukturiertes Logging
```

### Startup-Reihenfolge

1. Basis-System bootet, X11 startet
2. `signage-agent.service` startet (systemd)
3. Agent startet, prueft Konfiguration, startet `player-ui` HTTP-Server
4. `signage-browser-watchdog.service` startet (systemd)
5. Watchdog wartet initial 10s, bevor die ersten Checks starten
6. Agent laesst Chromium starten
7. Watchdog beginnt mit den Health-Checks

Diese Reihenfolge verhindert, dass der Watchdog versucht, den Browser zu ueberwachen, bevor der Agent bereit ist.

### Stopp-Reihenfolge bei Shutdown

1. systemd sendet SIGTERM an Agent und Browser-Watchdog
2. Watchdog: beendet sich und startet nichts mehr neu
3. Agent: beendet sich und faehrt Chromium herunter
4. systemd wartet auf den Abschluss

## Fehlerklassifizierung und Admin-Reporting

### Fehlerklassen

| Fehlerklasse | Symptom | Watchdog-Aktion | Admin-Alert |
|---|---|---|---|
| Prozess-Crash | PID weg | Sofortiger Neustart | Nach 3x Fehlschlag |
| Health-Check-Timeout | HTTP-Timeout | Backoff-Restart | Nach 5x Fehlschlag |
| Rendering-Fehler | Browser zeigt Fehlerseite | Neustart | Sofort sichtbar |
| Backoff-Maximum | 10+ Fehler in 5 Min | Stoppen, Alert | Sofort |
| Agent-Unhealthy | Server-Sync fehlgeschlagen | systemd-Neustart | Nach 3x Sync-Fehler |

### Admin-Oberflaeche

Status-Page und Admin-Dashboard zeigen:

```json
|
||||||
|
{
|
||||||
|
"screen_id": "info01",
|
||||||
|
"browser_status": {
|
||||||
|
"pid": 1234,
|
||||||
|
"health": "ok",
|
||||||
|
"last_check_at": "2025-03-23T14:25:00Z",
|
||||||
|
"restart_count_5m": 0,
|
||||||
|
"last_error": null
|
||||||
|
},
|
||||||
|
"agent_status": {
|
||||||
|
"pid": 567,
|
||||||
|
"uptime_seconds": 3600,
|
||||||
|
"sync_status": "ok",
|
||||||
|
"last_sync_at": "2025-03-23T14:24:55Z",
|
||||||
|
"systemd_restart_count": 0
|
||||||
|
},
|
||||||
|
"watchdog_alert": null
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
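On the Go side, the payload above could be decoded with types along these lines. This is a sketch; the type and field names are mine, as the backend's actual status types are not shown in this commit:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical types mirroring the status payload shown above.
type BrowserStatus struct {
	PID            int     `json:"pid"`
	Health         string  `json:"health"`
	LastCheckAt    string  `json:"last_check_at"`
	RestartCount5m int     `json:"restart_count_5m"`
	LastError      *string `json:"last_error"` // null while healthy
}

type AgentStatus struct {
	PID                 int    `json:"pid"`
	UptimeSeconds       int    `json:"uptime_seconds"`
	SyncStatus          string `json:"sync_status"`
	LastSyncAt          string `json:"last_sync_at"`
	SystemdRestartCount int    `json:"systemd_restart_count"`
}

type ScreenStatus struct {
	ScreenID      string        `json:"screen_id"`
	BrowserStatus BrowserStatus `json:"browser_status"`
	AgentStatus   AgentStatus   `json:"agent_status"`
	WatchdogAlert *string       `json:"watchdog_alert"`
}

// parseStatus decodes one status document as shown in the example above.
func parseStatus(data []byte) (ScreenStatus, error) {
	var st ScreenStatus
	err := json.Unmarshal(data, &st)
	return st, err
}

func main() {
	st, err := parseStatus([]byte(`{"screen_id":"info01","browser_status":{"pid":1234,"health":"ok"}}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(st.ScreenID, st.BrowserStatus.Health) // info01 ok
}
```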

## Configurable Parameters

In `/etc/signage/config.yml` or via environment variables:

```yaml
watchdog:
  browser:
    check_interval_sec: 30
    health_check_timeout_sec: 5
    restart_backoff_steps: [0, 2, 5, 10, 30]  # seconds
    max_consecutive_errors: 10
    error_window_sec: 300
  agent:
    systemd_unit: "signage-agent.service"
    healthcheck_timeout_sec: 10
```
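Applying `restart_backoff_steps` amounts to clamping the consecutive-failure count against the configured list. A minimal sketch, assuming the function name (it is not from the codebase):

```go
package main

import "fmt"

// nextBackoffDelay returns the delay in seconds before restart attempt n
// (0-based), clamping to the last configured step. With the default
// restart_backoff_steps [0, 2, 5, 10, 30], every attempt past the fifth
// waits 30 seconds.
func nextBackoffDelay(attempt int, steps []int) int {
	if len(steps) == 0 {
		return 0
	}
	if attempt >= len(steps) {
		return steps[len(steps)-1]
	}
	return steps[attempt]
}

func main() {
	steps := []int{0, 2, 5, 10, 30}
	for n := 0; n < 7; n++ {
		fmt.Printf("attempt %d -> wait %ds\n", n, nextBackoffDelay(n, steps))
	}
}
```

Counting errors inside `error_window_sec` and stopping at `max_consecutive_errors` then sits on top of this helper.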

## Testing and Validation

Test cases for the watchdog:

1. Kill Chromium manually (`kill -9 PID`): it should be restarted within 30 s
2. Stop and start the player agent: systemd should trigger a restart
3. Shut down the player-UI HTTP server: the browser watchdog should restart the browser
4. Trigger rapid consecutive crashes: verify the exponential backoff
5. Issue the admin command `restart_player`: orderly restart; the restart counter must not increase
6. Check the watchdog logs for structure and completeness

## Summary

The watchdog approach is:

- **Transparent**: clear logging and admin visibility
- **Progressive**: backoff instead of a restart loop
- **Defensive**: multiple detection methods (process, HTTP, optionally rendering)
- **Integrated**: works with systemd rather than against it
- **Scalable**: the same procedure applies to every player regardless of location or network

@@ -16,6 +16,7 @@ import (
 	"git.az-it.net/az/morz-infoboard/player/agent/internal/mqttheartbeat"
 	"git.az-it.net/az/morz-infoboard/player/agent/internal/mqttsubscriber"
 	"git.az-it.net/az/morz-infoboard/player/agent/internal/playerserver"
+	"git.az-it.net/az/morz-infoboard/player/agent/internal/screenshot"
 	"git.az-it.net/az/morz-infoboard/player/agent/internal/statusreporter"
 )

@@ -222,6 +223,14 @@ func (a *App) Run(ctx context.Context) error {
 	// Start polling the backend for playlist updates (60 s fallback + MQTT trigger).
 	go a.pollPlaylist(ctx)
 
+	// Phase 6: Periodische Screenshot-Erzeugung, wenn konfiguriert.
+	if a.Config.ScreenshotEvery > 0 {
+		ss := screenshot.New(a.Config.ScreenID, a.Config.ServerBaseURL, a.Config.ScreenshotEvery, a.logger)
+		go ss.Run(ctx)
+		a.logger.Printf("event=screenshot_enabled screen_id=%s interval_seconds=%d",
+			a.Config.ScreenID, a.Config.ScreenshotEvery)
+	}
+
 	a.emitHeartbeat()
 	a.mu.Lock()
 	a.status = StatusRunning

@@ -272,6 +281,10 @@ func (a *App) registerScreen(ctx context.Context) {
 		return
 	}
 	req.Header.Set("Content-Type", "application/json")
+	// K6: Register-Secret mitsenden, wenn konfiguriert.
+	if a.Config.RegisterSecret != "" {
+		req.Header.Set("X-Register-Secret", a.Config.RegisterSecret)
+	}
 
 	resp, err := http.DefaultClient.Do(req)
 	if err == nil {

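On the server side, the K6 check is the mirror image of this header: compare `X-Register-Secret` against `MORZ_INFOBOARD_REGISTER_SECRET` before accepting a registration. The server-side handler itself is not part of this excerpt, so the following is a hedged sketch of just the comparison; `subtle.ConstantTimeCompare` avoids leaking the secret through timing differences:

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// registerSecretOK reports whether a registration request may proceed.
// An empty configured secret keeps the endpoint open, matching the
// backward-compatible behaviour described for K6; otherwise the header
// value must match the configured secret exactly, compared in constant time.
func registerSecretOK(headerValue, configuredSecret string) bool {
	if configuredSecret == "" {
		return true
	}
	return subtle.ConstantTimeCompare([]byte(headerValue), []byte(configuredSecret)) == 1
}

func main() {
	fmt.Println(registerSecretOK("s3cret", "s3cret")) // true
	fmt.Println(registerSecretOK("wrong", "s3cret"))  // false
	fmt.Println(registerSecretOK("", ""))             // true: no secret configured
}
```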
@@ -23,6 +23,13 @@ type Config struct {
 	PlayerListenAddr string `json:"player_listen_addr"`
 	// PlayerContentURL is a fallback URL shown when no playlist is available from the server.
 	PlayerContentURL string `json:"player_content_url"`
+	// RegisterSecret ist das Pre-Shared-Secret für POST /api/v1/screens/register (K6).
+	// Muss mit MORZ_INFOBOARD_REGISTER_SECRET auf dem Server übereinstimmen.
+	// Wenn leer, wird kein Header gesendet (kompatibel mit Servern ohne Secret).
+	RegisterSecret string `json:"register_secret"`
+	// ScreenshotEvery gibt das Intervall in Sekunden für periodische Screenshots an (Phase 6).
+	// 0 oder negativ = Screenshots deaktiviert.
+	ScreenshotEvery int `json:"screenshot_every_seconds"`
 }
 
 const defaultConfigPath = "/etc/signage/config.json"

@@ -90,6 +97,12 @@ func overrideFromEnv(cfg *Config) {
 	cfg.ScreenName = getenv("MORZ_INFOBOARD_SCREEN_NAME", cfg.ScreenName)
 	cfg.ScreenOrientation = getenv("MORZ_INFOBOARD_SCREEN_ORIENTATION", cfg.ScreenOrientation)
 	cfg.PlayerContentURL = getenv("MORZ_INFOBOARD_PLAYER_CONTENT_URL", cfg.PlayerContentURL)
+	cfg.RegisterSecret = getenv("MORZ_INFOBOARD_REGISTER_SECRET", cfg.RegisterSecret)
+	if value := getenv("MORZ_INFOBOARD_SCREENSHOT_EVERY", ""); value != "" {
+		var parsed int
+		_, _ = fmt.Sscanf(value, "%d", &parsed)
+		cfg.ScreenshotEvery = parsed
+	}
 	if value := getenv("MORZ_INFOBOARD_STATUS_REPORT_EVERY", ""); value != "" {
 		var parsed int
 		_, _ = fmt.Sscanf(value, "%d", &parsed)

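The `fmt.Sscanf` pattern used here silently leaves `parsed` at 0 on malformed input, which then disables screenshots. An alternative sketch using `strconv.Atoi` with an explicit fallback; the helper name is illustrative, not from the codebase:

```go
package main

import (
	"fmt"
	"strconv"
)

// envInt parses value as a base-10 integer and returns fallback when the
// string is empty or not a valid number, instead of silently yielding 0.
func envInt(value string, fallback int) int {
	if value == "" {
		return fallback
	}
	n, err := strconv.Atoi(value)
	if err != nil {
		return fallback
	}
	return n
}

func main() {
	fmt.Println(envInt("45", 60))  // 45
	fmt.Println(envInt("", 60))    // 60
	fmt.Println(envInt("abc", 60)) // 60
}
```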
22  player/agent/internal/playerserver/assets/pdf.min.js (vendored, new file)
File diff suppressed because one or more lines are too long

22  player/agent/internal/playerserver/assets/pdf.worker.min.js (vendored, new file)
File diff suppressed because one or more lines are too long

@@ -208,6 +208,13 @@ const playerHTML = `<!DOCTYPE html>
 		opacity: 0;
 		transition: opacity 0.5s ease;
 	}
 
+	/* PDF.js Canvas */
+	#pdf-canvas {
+		position: fixed; inset: 0;
+		width: 100%; height: 100%;
+		display: none; background: #000; z-index: 10;
+	}
+
 	#img-view {
 		object-fit: contain;
 		background: #000;

@@ -252,22 +259,34 @@ const playerHTML = `<!DOCTYPE html>
 	<iframe id="frame" allow="autoplay; fullscreen" allowfullscreen></iframe>
 	<img id="img-view" alt="">
 	<video id="video-view" autoplay muted playsinline></video>
+	<canvas id="pdf-canvas"></canvas>
 	<div id="frame-error">
 		<span class="error-title" id="frame-error-title"></span>
 		<span class="error-hint">Seite kann nicht eingebettet werden</span>
 	</div>
 	<div id="dot"></div>
 
+	<script src="/assets/pdf.min.js"></script>
 	<script>
 	var splash = document.getElementById('splash');
 	var overlay = document.getElementById('info-overlay');
 	var frame = document.getElementById('frame');
 	var imgView = document.getElementById('img-view');
 	var videoView = document.getElementById('video-view');
+	var pdfCanvas = document.getElementById('pdf-canvas');
 	var frameError = document.getElementById('frame-error');
 	var frameErrorTitle = document.getElementById('frame-error-title');
 	var dot = document.getElementById('dot');
 
+	// PDF.js Worker konfigurieren
+	if (typeof pdfjsLib !== 'undefined') {
+		pdfjsLib.GlobalWorkerOptions.workerSrc = '/assets/pdf.worker.min.js';
+	}
+
+	// Aktuell laufende PDF-Render-Session; wird genutzt um veraltete Sessions
+	// abzubrechen wenn hideAllContent() aufgerufen wird.
+	var pdfSession = null;
+
 	// ── Splash-Orientierung ───────────────────────────────────────────
 	function updateSplash() {
 		var portrait = window.innerHeight > window.innerWidth;

@@ -349,6 +368,10 @@ const playerHTML = `<!DOCTYPE html>
 		videoView.pause();
 		videoView.src = '';
 
+		// Laufende PDF-Session abbrechen.
+		pdfSession = null;
+		pdfCanvas.style.display = 'none';
+
 		[frame, imgView, videoView].forEach(function(el) {
 			if (el.style.display !== 'none') {
 				el.style.opacity = '0';

@@ -433,29 +456,12 @@ const playerHTML = `<!DOCTYPE html>
 			rotateTimer = setTimeout(advanceOnce, ms);
 			videoView.onended = advanceOnce;
+
+		} else if (type === 'pdf') {
+			showPdf(item);
+
 		} else {
-			// type === 'web', 'pdf' oder unbekannt → iframe
-			if (type === 'pdf') {
-				frame.src = (function pdfUrl(src) {
-					var defaults = {toolbar: '0', navpanes: '0', scrollbar: '0', view: 'Fit', page: '1'};
-					var hashIdx = src.indexOf('#');
-					var base = hashIdx >= 0 ? src.substring(0, hashIdx) : src;
-					var existing = hashIdx >= 0 ? src.substring(hashIdx + 1) : '';
-					var params = {};
-					existing.split('&').forEach(function(p) {
-						var kv = p.split('=');
-						if (kv[0]) params[kv[0]] = kv[1] || '';
-					});
-					for (var k in defaults) {
-						if (!(k in params)) params[k] = defaults[k];
-					}
-					var parts = [];
-					for (var k in params) parts.push(k + '=' + params[k]);
-					return base + '#' + parts.join('&');
-				})(item.src);
-			} else {
-				if (frame.src !== item.src) { frame.src = item.src; }
-			}
+			// type === 'web' oder unbekannt → iframe
+			if (frame.src !== item.src) { frame.src = item.src; }
 			frame.style.display = 'block';
 			requestAnimationFrame(function() {
 				requestAnimationFrame(function() { frame.style.opacity = '1'; });

@@ -486,6 +492,83 @@ const playerHTML = `<!DOCTYPE html>
 		}
 	}
 
+	// ── PDF.js Seitendurchblättern ────────────────────────────────────
+	function showPdf(item) {
+		if (typeof pdfjsLib === 'undefined') {
+			// PDF.js nicht verfügbar → Fehler anzeigen
+			showFrameError(item);
+			return;
+		}
+
+		// Neue Session starten; alte wird durch pdfSession-Check invalidiert
+		var session = {};
+		pdfSession = session;
+
+		// Graceful-Fallback-Timeout: falls PDF nicht innerhalb von 8s lädt → Fehler
+		var loadTimeout = setTimeout(function() {
+			if (pdfSession === session) {
+				showFrameError(item);
+			}
+		}, 8000);
+
+		pdfCanvas.style.display = 'block';
+
+		pdfjsLib.getDocument(item.src).promise.then(function(pdf) {
+			clearTimeout(loadTimeout);
+
+			// Session bereits abgebrochen?
+			if (pdfSession !== session) { return; }
+
+			var numPages = pdf.numPages;
+			var secsPerPage = Math.max(2, Math.floor((item.duration_seconds || 20) / numPages));
+			var pageNum = 1;
+
+			function renderPage(n) {
+				if (pdfSession !== session) { return; } // Session abgebrochen
+
+				pdf.getPage(n).then(function(page) {
+					if (pdfSession !== session) { return; }
+
+					var baseViewport = page.getViewport({ scale: 1.0 });
+					var scale = window.innerWidth / baseViewport.width;
+					// Auch Höhe berücksichtigen damit die Seite vollständig sichtbar bleibt
+					var scaleH = window.innerHeight / baseViewport.height;
+					if (scaleH < scale) { scale = scaleH; }
+					var viewport = page.getViewport({ scale: scale });
+
+					pdfCanvas.width = viewport.width;
+					pdfCanvas.height = viewport.height;
+
+					var ctx = pdfCanvas.getContext('2d');
+					page.render({ canvasContext: ctx, viewport: viewport }).promise.then(function() {
+						if (pdfSession !== session) { return; }
+
+						// Nach secsPerPage Sekunden zur nächsten Seite
+						rotateTimer = setTimeout(function() {
+							if (pdfSession !== session) { return; }
+							if (n < numPages) {
+								renderPage(n + 1);
+							} else {
+								// Alle Seiten gezeigt → normale Rotation fortsetzen
+								currentIdx = (currentIdx + 1) % items.length;
+								showItem(items[currentIdx]);
+							}
+						}, secsPerPage * 1000);
+					}).catch(function() {
+						if (pdfSession === session) { showFrameError(item); }
+					});
+				}).catch(function() {
+					if (pdfSession === session) { showFrameError(item); }
+				});
+			}
+
+			renderPage(pageNum);
+		}).catch(function() {
+			clearTimeout(loadTimeout);
+			if (pdfSession === session) { showFrameError(item); }
+		});
+	}
+
 	function showFrameError(item) {
 		hideAllContent();
 		overlay.style.display = 'none';

210  player/agent/internal/screenshot/screenshot.go (new file)

@@ -0,0 +1,210 @@
+// Package screenshot erzeugt periodisch Screenshots des aktuell angezeigten Inhalts
+// und sendet sie an den Backend-Server (Phase 6).
+//
+// Strategie (in dieser Reihenfolge):
+//  1. scrot -z -q 60 /tmp/morz-screenshot.jpg — leichtgewichtig, für X11
+//  2. import -window root /tmp/morz-screenshot.png — ImageMagick, falls scrot fehlt
+//  3. xwd -root -silent | convert xwd:- /tmp/morz-screenshot.jpg — Fallback
+//
+// Der Screenshot wird per HTTP-Multipart-POST an
+// POST /api/v1/player/screenshot gesendet.
+package screenshot
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	"log"
+	"mime/multipart"
+	"net/http"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"time"
+)
+
+const (
+	screenshotPath    = "/tmp/morz-screenshot.jpg"
+	defaultInterval   = 60 * time.Second
+	uploadTimeout     = 15 * time.Second
+	screenshotQuality = "60" // JPEG quality (0-100)
+)
+
+// Screenshotter erzeugt periodisch Screenshots und sendet sie an den Server.
+type Screenshotter struct {
+	screenID      string
+	serverBaseURL string
+	interval      time.Duration
+	logger        *log.Logger
+}
+
+// New erzeugt einen neuen Screenshotter.
+func New(screenID, serverBaseURL string, intervalSeconds int, logger *log.Logger) *Screenshotter {
+	interval := defaultInterval
+	if intervalSeconds > 0 {
+		interval = time.Duration(intervalSeconds) * time.Second
+	}
+	if logger == nil {
+		logger = log.New(os.Stdout, "screenshot ", log.LstdFlags|log.LUTC)
+	}
+	return &Screenshotter{
+		screenID:      screenID,
+		serverBaseURL: serverBaseURL,
+		interval:      interval,
+		logger:        logger,
+	}
+}
+
+// Run startet die periodische Screenshot-Schleife und blockiert bis ctx abgebrochen wird.
+func (s *Screenshotter) Run(ctx context.Context) {
+	ticker := time.NewTicker(s.interval)
+	defer ticker.Stop()
+
+	// Erster Screenshot nach kurzem Delay (damit Chromium hochgefahren ist).
+	select {
+	case <-ctx.Done():
+		return
+	case <-time.After(10 * time.Second):
+	}
+	s.takeAndSend(ctx)
+
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-ticker.C:
+			s.takeAndSend(ctx)
+		}
+	}
+}
+
+// takeAndSend erzeugt einen Screenshot und sendet ihn an den Server.
+func (s *Screenshotter) takeAndSend(ctx context.Context) {
+	path, err := s.capture()
+	if err != nil {
+		s.logger.Printf("event=screenshot_capture_failed screen_id=%s err=%v", s.screenID, err)
+		return
+	}
+	defer os.Remove(path) //nolint:errcheck
+
+	if err := s.upload(ctx, path); err != nil {
+		s.logger.Printf("event=screenshot_upload_failed screen_id=%s err=%v", s.screenID, err)
+		return
+	}
+	s.logger.Printf("event=screenshot_sent screen_id=%s", s.screenID)
+}
+
+// capture erzeugt einen Screenshot mit dem ersten verfügbaren Tool.
+func (s *Screenshotter) capture() (string, error) {
+	// Aufräumen falls eine alte Datei existiert.
+	os.Remove(screenshotPath) //nolint:errcheck
+
+	// Versuch 1: scrot (leichtgewichtig, für X11)
+	if path, err := tryScrot(); err == nil {
+		return path, nil
+	}
+
+	// Versuch 2: import (ImageMagick)
+	if path, err := tryImport(); err == nil {
+		return path, nil
+	}
+
+	// Versuch 3: xwd + convert
+	if path, err := tryXwd(); err == nil {
+		return path, nil
+	}
+
+	return "", fmt.Errorf("kein Screenshot-Tool verfügbar (scrot, import, xwd)")
+}
+
+func tryScrot() (string, error) {
+	cmd := exec.Command("scrot", "-z", "-q", screenshotQuality, screenshotPath)
+	if err := cmd.Run(); err != nil {
+		return "", err
+	}
+	return screenshotPath, nil
+}
+
+func tryImport() (string, error) {
+	// ImageMagick import: -window root macht einen Screenshot des gesamten X-Displays.
+	pngPath := "/tmp/morz-screenshot-tmp.png"
+	cmd := exec.Command("import", "-window", "root", pngPath)
+	if err := cmd.Run(); err != nil {
+		return "", err
+	}
+	// Zu JPEG konvertieren.
+	cmd = exec.Command("convert", pngPath, "-quality", screenshotQuality, screenshotPath)
+	defer os.Remove(pngPath) //nolint:errcheck
+	if err := cmd.Run(); err != nil {
+		return "", err
+	}
+	return screenshotPath, nil
+}
+
+func tryXwd() (string, error) {
+	xwdPath := "/tmp/morz-screenshot-tmp.xwd"
+	// xwd schreibt in Datei.
+	xwdCmd := exec.Command("xwd", "-root", "-silent", "-out", xwdPath)
+	if err := xwdCmd.Run(); err != nil {
+		return "", err
+	}
+	defer os.Remove(xwdPath) //nolint:errcheck
+	// convert xwd -> jpg.
+	cmd := exec.Command("convert", "xwd:"+xwdPath, "-quality", screenshotQuality, screenshotPath)
+	if err := cmd.Run(); err != nil {
+		return "", err
+	}
+	return screenshotPath, nil
+}
+
+// upload sendet den Screenshot per Multipart-POST an den Server.
+func (s *Screenshotter) upload(ctx context.Context, path string) error {
+	data, err := os.ReadFile(path)
+	if err != nil {
+		return fmt.Errorf("read screenshot: %w", err)
+	}
+
+	var body bytes.Buffer
+	writer := multipart.NewWriter(&body)
+	_ = writer.WriteField("screen_id", s.screenID)
+
+	ext := filepath.Ext(path)
+	mimeType := "image/jpeg"
+	if ext == ".png" {
+		mimeType = "image/png"
+	}
+
+	fw, err := writer.CreateFormFile("screenshot", "screenshot"+ext)
+	if err != nil {
+		return fmt.Errorf("create form file: %w", err)
+	}
+	if _, err := fw.Write(data); err != nil {
+		return fmt.Errorf("write form file: %w", err)
+	}
+	_ = writer.WriteField("mime_type", mimeType)
+	writer.Close()
+
+	uploadCtx, cancel := context.WithTimeout(ctx, uploadTimeout)
+	defer cancel()
+
+	req, err := http.NewRequestWithContext(uploadCtx,
+		http.MethodPost,
+		s.serverBaseURL+"/api/v1/player/screenshot",
+		&body,
+	)
+	if err != nil {
+		return err
+	}
+	req.Header.Set("Content-Type", writer.FormDataContentType())
+
+	resp, err := http.DefaultClient.Do(req)
+	if err != nil {
+		return err
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode >= 400 {
+		return fmt.Errorf("server returned %d", resp.StatusCode)
+	}
+	return nil
+}

@@ -2,22 +2,31 @@ package main
 
 import (
 	"log"
+	"log/slog"
 	"os"
 
 	"git.az-it.net/az/morz-infoboard/server/backend/internal/app"
 )
 
 func main() {
-	logger := log.New(os.Stdout, "backend ", log.LstdFlags|log.LUTC)
+	// V6: Strukturiertes JSON-Logging als Standard-Logger.
+	// Alle slog.Info/slog.Error-Aufrufe im Programm nutzen diesen Handler.
+	slogHandler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
+		Level: slog.LevelInfo,
+	})
+	slog.SetDefault(slog.New(slogHandler))
+
+	// Kompatibilitäts-Logger für Komponenten die noch *log.Logger erwarten.
+	stdLogger := log.New(os.Stdout, "backend ", log.LstdFlags|log.LUTC)
 
 	application, err := app.New()
 	if err != nil {
-		logger.Fatalf("init app: %v", err)
+		stdLogger.Fatalf("init app: %v", err)
 	}
 
-	logger.Printf("starting backend on %s", application.Config.HTTPAddress)
+	slog.Info("backend starting", "addr", application.Config.HTTPAddress)
 
 	if err := application.Run(); err != nil {
-		logger.Fatalf("run backend: %v", err)
+		stdLogger.Fatalf("run backend: %v", err)
 	}
 }

@@ -6,8 +6,11 @@ import (
 	"encoding/hex"
 	"errors"
 	"log"
+	"log/slog"
 	"net/http"
 	"os"
+	"os/signal"
+	"syscall"
 	"time"
 
 	"git.az-it.net/az/morz-infoboard/server/backend/internal/config"

@@ -22,11 +25,13 @@ type App struct {
 	server    *http.Server
 	notifier  *mqttnotifier.Notifier
 	authStore *store.AuthStore
+	dbPool    *db.Pool // V7: für db.Close() im Shutdown
 	logger    *log.Logger
 }
 
 func New() (*App, error) {
 	cfg := config.Load()
+	// Kompatibilitäts-Logger für db.Connect (erwartet *log.Logger).
 	logger := log.New(os.Stdout, "backend ", log.LstdFlags|log.LUTC)
 
 	// Ensure upload directory exists.

@@ -63,19 +68,20 @@ func New() (*App, error) {
 			return nil, err
 		}
 		adminPassword = hex.EncodeToString(buf)
-		logger.Printf("event=admin_password_generated password=%s", adminPassword)
+		// V6: slog statt log.Printf — Passwort nie loggen (K5).
+		slog.Info("admin password generated", "event", "admin_password_generated", "password", "[gesetzt]")
 	}
 	if err := authStore.EnsureAdminUser(context.Background(), cfg.DefaultTenantSlug, adminPassword); err != nil {
-		logger.Printf("event=ensure_admin_user_failed err=%v", err)
+		slog.Error("ensure admin user failed", "event", "ensure_admin_user_failed", "err", err)
 		// Non-fatal: server starts even if admin setup fails.
 	}
 
 	// MQTT notifier (no-op when broker not configured).
 	notifier := mqttnotifier.New(cfg.MQTTBroker, cfg.MQTTUsername, cfg.MQTTPassword)
 	if cfg.MQTTBroker != "" {
-		logger.Printf("event=mqtt_notifier_enabled broker=%s", cfg.MQTTBroker)
+		slog.Info("mqtt notifier enabled", "event", "mqtt_notifier_enabled", "broker", cfg.MQTTBroker)
 	} else {
-		logger.Printf("event=mqtt_notifier_disabled reason=no_broker_configured")
+		slog.Info("mqtt notifier disabled", "event", "mqtt_notifier_disabled", "reason", "no_broker_configured")
 	}
 
 	handler := httpapi.NewRouter(httpapi.RouterDeps{

@@ -96,6 +102,7 @@ func New() (*App, error) {
 		server:    &http.Server{Addr: cfg.HTTPAddress, Handler: handler},
 		notifier:  notifier,
 		authStore: authStore,
+		dbPool:    pool, // V7: Referenz für Shutdown
 		logger:    logger,
 	}, nil
 }

@@ -103,9 +110,12 @@ func New() (*App, error) {
 func (a *App) Run() error {
 	defer a.notifier.Close()
 
-	// Session-Cleanup: expired sessions werden stündlich aus der DB entfernt.
+	// W2+V7: Graceful Shutdown mit Signal-Handling.
+	// Der Context wird bei SIGTERM/SIGINT abgebrochen, was den Shutdown einleitet.
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()
+
+	// Session-Cleanup: expired sessions werden stündlich aus der DB entfernt.
 	go func() {
 		ticker := time.NewTicker(1 * time.Hour)
 		defer ticker.Stop()

@@ -113,9 +123,9 @@ func (a *App) Run() error {
 			select {
 			case <-ticker.C:
 				if err := a.authStore.CleanExpiredSessions(ctx); err != nil {
-					a.logger.Printf("event=session_cleanup_failed err=%v", err)
+					slog.Error("session cleanup failed", "event", "session_cleanup_failed", "err", err)
 				} else {
-					a.logger.Printf("event=session_cleanup_ok")
+					slog.Info("session cleanup ok", "event", "session_cleanup_ok")
 				}
 			case <-ctx.Done():
 				return

@@ -123,6 +133,26 @@ func (a *App) Run() error {
 		}
 	}()
 
+	// W2: Signal-Handler für Graceful Shutdown.
+	sigCh := make(chan os.Signal, 1)
+	signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
+	go func() {
+		sig := <-sigCh
+		slog.Info("shutdown signal received", "event", "shutdown_signal", "signal", sig.String())
+		cancel() // Session-Cleanup stoppen.
+
+		// HTTP-Server mit Timeout herunterfahren.
+		shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 15*time.Second)
+		defer shutdownCancel()
+		if err := a.server.Shutdown(shutdownCtx); err != nil {
+			slog.Error("shutdown error", "event", "shutdown_error", "err", err)
+		}
+
+		// V7: DB-Pool schließen.
+		a.dbPool.Close()
+		slog.Info("shutdown complete", "event", "shutdown_complete")
+	}()
+
 	err := a.server.ListenAndServe()
 	if errors.Is(err, http.ErrServerClosed) {
 		return nil

@@ -15,6 +15,10 @@ type Config struct {
 	AdminPassword     string // MORZ_INFOBOARD_ADMIN_PASSWORD
 	DefaultTenantSlug string // MORZ_INFOBOARD_DEFAULT_TENANT (default: "morz")
 	DevMode           bool   // MORZ_INFOBOARD_DEV_MODE — when true, session cookie works without HTTPS
+	// RegisterSecret schützt POST /api/v1/screens/register (K6).
+	// Wenn gesetzt, muss der Player den Header X-Register-Secret: <secret> senden.
+	// Wenn leer, ist der Endpoint für alle erreichbar (Rückwärtskompatibilität).
+	RegisterSecret string // MORZ_INFOBOARD_REGISTER_SECRET
 }
 
 func Load() Config {

@ -29,6 +33,7 @@ func Load() Config {
|
||||||
AdminPassword: os.Getenv("MORZ_INFOBOARD_ADMIN_PASSWORD"),
|
AdminPassword: os.Getenv("MORZ_INFOBOARD_ADMIN_PASSWORD"),
|
||||||
DefaultTenantSlug: getenv("MORZ_INFOBOARD_DEFAULT_TENANT", "morz"),
|
DefaultTenantSlug: getenv("MORZ_INFOBOARD_DEFAULT_TENANT", "morz"),
|
||||||
DevMode: os.Getenv("MORZ_INFOBOARD_DEV_MODE") == "true",
|
DevMode: os.Getenv("MORZ_INFOBOARD_DEV_MODE") == "true",
|
||||||
|
RegisterSecret: os.Getenv("MORZ_INFOBOARD_REGISTER_SECRET"),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
server/backend/internal/fileutil/fileutil.go (new file, +68)
@@ -0,0 +1,68 @@
+// Package fileutil contains shared file helpers for upload handlers (V1, N6).
+package fileutil
+
+import (
+	"fmt"
+	"io"
+	"os"
+	"path/filepath"
+	"strings"
+	"time"
+)
+
+// SaveUploadedFile stores a file stream under uploadDir/{tenantSlug}/ and
+// returns the relative HTTP path (/uploads/{tenantSlug}/filename) together
+// with the number of bytes written.
+//
+// V1: shared upload logic, replacing three duplicated implementations.
+// N6: tenant-specific directory instead of a shared location.
+func SaveUploadedFile(file io.Reader, originalFilename, title, uploadDir, tenantSlug string) (storagePath string, size int64, err error) {
+	safeSlug := sanitize(tenantSlug)
+	if safeSlug == "" {
+		safeSlug = "default"
+	}
+	tenantDir := filepath.Join(uploadDir, safeSlug)
+	if mkErr := os.MkdirAll(tenantDir, 0755); mkErr != nil {
+		return "", 0, fmt.Errorf("fileutil: mkdir %s: %w", tenantDir, mkErr)
+	}
+
+	ext := filepath.Ext(originalFilename)
+	safeTitle := sanitize(title)
+	if safeTitle == "" {
+		safeTitle = "file"
+	}
+	filename := fmt.Sprintf("%d_%s%s", time.Now().UnixNano(), safeTitle, ext)
+	destPath := filepath.Join(tenantDir, filename)
+
+	dest, createErr := os.Create(destPath)
+	if createErr != nil {
+		return "", 0, fmt.Errorf("fileutil: create %s: %w", destPath, createErr)
+	}
+	defer dest.Close()
+
+	n, copyErr := io.Copy(dest, file)
+	if copyErr != nil {
+		os.Remove(destPath) //nolint:errcheck
+		return "", 0, fmt.Errorf("fileutil: write %s: %w", destPath, copyErr)
+	}
+
+	return "/uploads/" + safeSlug + "/" + filename, n, nil
+}
+
+// sanitize converts a string into a safe filename component
+// (only a-z, A-Z, 0-9, -, _; at most 40 characters).
+func sanitize(s string) string {
+	var b strings.Builder
+	for _, r := range s {
+		if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '-' || r == '_' {
+			b.WriteRune(r)
+		} else {
+			b.WriteRune('_')
+		}
+	}
+	out := b.String()
+	if len(out) > 40 {
+		out = out[:40]
+	}
+	return out
+}
server/backend/internal/httpapi/csrf.go (new file, +98)
@@ -0,0 +1,98 @@
+package httpapi
+
+// csrf.go: double-submit-cookie CSRF protection (K1) and neuteredFileSystem (N5).
+//
+// Every state-changing request (POST/PUT/PATCH/DELETE) must:
+//  1. Carry the cookie "morz_csrf".
+//  2. Send the same value as the form field "csrf_token" or the header "X-CSRF-Token".
+//
+// Token creation: SetCSRFCookie is called when the login/manage pages are rendered.
+// Token validation: the CSRFProtect middleware checks that cookie and payload match.
+//
+// SameSite=Lax already blocks most cross-domain CSRF attacks, but the
+// double-submit pattern adds protection for forms that could be embedded
+// via GET on other pages.
+
+import (
+	"crypto/rand"
+	"encoding/hex"
+	"net/http"
+)
+
+const (
+	csrfCookieName = "morz_csrf"
+	csrfFieldName  = "csrf_token"
+	csrfHeaderName = "X-CSRF-Token"
+)
+
+// GenerateCSRFToken creates a random 32-byte token, hex-encoded.
+func GenerateCSRFToken() (string, error) {
+	buf := make([]byte, 32)
+	if _, err := rand.Read(buf); err != nil {
+		return "", err
+	}
+	return hex.EncodeToString(buf), nil
+}
+
+// SetCSRFCookie sets (or refreshes) the CSRF cookie on the response.
+// It returns the token so it can be embedded in template data.
+func SetCSRFCookie(w http.ResponseWriter, r *http.Request, devMode bool) string {
+	// Reuse an existing token if present.
+	if c, err := r.Cookie(csrfCookieName); err == nil && c.Value != "" {
+		return c.Value
+	}
+	token, err := GenerateCSRFToken()
+	if err != nil {
+		// On failure return an empty token; handlers must check for this case.
+		return ""
+	}
+	http.SetCookie(w, &http.Cookie{
+		Name:     csrfCookieName,
+		Value:    token,
+		Path:     "/",
+		HttpOnly: false, // client-side JS must be able to read the token for the double-submit pattern
+		Secure:   !devMode,
+		SameSite: http.SameSiteLaxMode,
+		MaxAge:   8 * 3600, // 8h, matches sessionTTL
+	})
+	return token
+}
+
+// CSRFTokenFromRequest reads the CSRF token from the form field or header.
+func CSRFTokenFromRequest(r *http.Request) string {
+	// The header takes precedence (API clients).
+	if h := r.Header.Get(csrfHeaderName); h != "" {
+		return h
+	}
+	// Form field (HTML forms).
+	return r.FormValue(csrfFieldName)
+}
+
+// CSRFProtect is middleware for POST/PUT/PATCH/DELETE requests.
+// It checks that the CSRF token in the request matches the cookie.
+// GET/HEAD/OPTIONS requests pass through.
+func CSRFProtect(devMode bool) func(http.Handler) http.Handler {
+	return func(next http.Handler) http.Handler {
+		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			switch r.Method {
+			case http.MethodGet, http.MethodHead, http.MethodOptions, http.MethodTrace:
+				next.ServeHTTP(w, r)
+				return
+			}
+
+			cookie, err := r.Cookie(csrfCookieName)
+			if err != nil || cookie.Value == "" {
+				http.Error(w, "CSRF-Token fehlt (Cookie)", http.StatusForbidden)
+				return
+			}
+
+			token := CSRFTokenFromRequest(r)
+			if token == "" || token != cookie.Value {
+				http.Error(w, "Ungültiger CSRF-Token", http.StatusForbidden)
+				return
+			}
+
+			next.ServeHTTP(w, r)
+		})
+	}
+}
@ -8,25 +8,27 @@ import (
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"git.az-it.net/az/morz-infoboard/server/backend/internal/config"
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/config"
|
||||||
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
|
||||||
"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
|
||||||
"golang.org/x/crypto/bcrypt"
|
"golang.org/x/crypto/bcrypt"
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
const sessionTTL = 8 * time.Hour
|
||||||
sessionCookieName = "morz_session"
|
|
||||||
sessionTTL = 8 * time.Hour
|
// sessionCookieName ist ein Alias auf die zentrale Konstante (V5).
|
||||||
)
|
const sessionCookieName = reqcontext.SessionCookieName
|
||||||
|
|
||||||
// loginData is the template data for the login page.
|
// loginData is the template data for the login page.
|
||||||
type loginData struct {
|
type loginData struct {
|
||||||
Error string
|
Error string
|
||||||
Next string
|
Next string
|
||||||
|
CSRFToken string
|
||||||
}
|
}
|
||||||
|
|
||||||
// HandleLoginUI renders the login form (GET /login).
|
// HandleLoginUI renders the login form (GET /login).
|
||||||
// If a valid session cookie is already present, the user is redirected to /admin
|
// If a valid session cookie is already present, the user is redirected to /admin
|
||||||
// (or the tenant dashboard once tenants are wired up in Phase 3).
|
// (or the tenant dashboard once tenants are wired up in Phase 3).
|
||||||
func HandleLoginUI(authStore *store.AuthStore) http.HandlerFunc {
|
func HandleLoginUI(authStore *store.AuthStore, cfg config.Config) http.HandlerFunc {
|
||||||
tmpl := template.Must(template.New("login").Parse(loginTmpl))
|
tmpl := template.Must(template.New("login").Parse(loginTmpl))
|
||||||
return func(w http.ResponseWriter, r *http.Request) {
|
return func(w http.ResponseWriter, r *http.Request) {
|
||||||
// Redirect if already logged in.
|
// Redirect if already logged in.
|
||||||
|
|
@ -43,8 +45,11 @@ func HandleLoginUI(authStore *store.AuthStore) http.HandlerFunc {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// K1: CSRF-Token für das Login-Formular setzen/erneuern.
|
||||||
|
csrfToken := setCSRFCookie(w, r, cfg.DevMode)
|
||||||
|
|
||||||
next := r.URL.Query().Get("next")
|
next := r.URL.Query().Get("next")
|
||||||
data := loginData{Next: sanitizeNext(next)}
|
data := loginData{Next: sanitizeNext(next), CSRFToken: csrfToken}
|
||||||
w.Header().Set("Content-Type", "text/html; charset=utf-8")
|
w.Header().Set("Content-Type", "text/html; charset=utf-8")
|
||||||
_ = tmpl.Execute(w, data)
|
_ = tmpl.Execute(w, data)
|
||||||
}
|
}
|
||||||
|
|
|
||||||
server/backend/internal/httpapi/manage/csrf_helpers.go (new file, +44)
@@ -0,0 +1,44 @@
+package manage
+
+// csrf_helpers.go: CSRF helpers for the manage package (K1).
+//
+// The manage package must not import httpapi (that would create an import cycle).
+// The minimal CSRF helpers are therefore duplicated here.
+// The actual CSRF middleware lives in httpapi/csrf.go.
+
+import (
+	"crypto/rand"
+	"encoding/hex"
+	"net/http"
+)
+
+const (
+	csrfCookieName = "morz_csrf"
+	// CSRFFieldName is the name of the hidden form field carrying the CSRF token.
+	// Embedded in templates as {{.CSRFToken}}.
+	CSRFFieldName = "csrf_token"
+)
+
+// setCSRFCookie sets (or refreshes) the CSRF cookie and returns the token.
+// Called by handlers that render GET pages containing forms.
+func setCSRFCookie(w http.ResponseWriter, r *http.Request, devMode bool) string {
+	// Reuse an existing token.
+	if c, err := r.Cookie(csrfCookieName); err == nil && c.Value != "" {
+		return c.Value
+	}
+	buf := make([]byte, 32)
+	if _, err := rand.Read(buf); err != nil {
+		return ""
+	}
+	token := hex.EncodeToString(buf)
+	http.SetCookie(w, &http.Cookie{
+		Name:     csrfCookieName,
+		Value:    token,
+		Path:     "/",
+		HttpOnly: false, // forms use the hidden field; client-side JS may also read the cookie
+		Secure:   !devMode,
+		SameSite: http.SameSiteLaxMode,
+		MaxAge:   8 * 3600, // 8h
+	})
+	return token
+}
@@ -2,14 +2,13 @@ package manage

 import (
 	"encoding/json"
-	"fmt"
-	"io"
 	"net/http"
 	"os"
 	"path/filepath"
 	"strings"
-	"time"

+	"git.az-it.net/az/morz-infoboard/server/backend/internal/fileutil"
+	"git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
 	"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
 )

@@ -46,6 +45,8 @@ func HandleUploadMedia(tenants *store.TenantStore, media *store.MediaStore, uplo
 	}
 	tenantID := tenant.ID

+	// W3: MaxBytesReader caps the entire request body at maxUploadSize.
+	r.Body = http.MaxBytesReader(w, r.Body, maxUploadSize)
 	if err := r.ParseMultipartForm(maxUploadSize); err != nil {
 		http.Error(w, "request too large or not multipart", http.StatusBadRequest)
 		return

@@ -90,31 +91,15 @@ func HandleUploadMedia(tenants *store.TenantStore, media *store.MediaStore, uplo
 		title = strings.TrimSuffix(header.Filename, filepath.Ext(header.Filename))
 	}

-	// Generate unique storage path.
-	ext := filepath.Ext(header.Filename)
-	filename := fmt.Sprintf("%d_%s%s", time.Now().UnixNano(), sanitize(title), ext)
-	destPath := filepath.Join(uploadDir, filename)
-
-	dest, err := os.Create(destPath)
+	// V1+N6: shared upload function, tenant-specific directory.
+	storagePath, size, err := fileutil.SaveUploadedFile(file, header.Filename, title, uploadDir, r.PathValue("tenantSlug"))
 	if err != nil {
 		http.Error(w, "storage error", http.StatusInternalServerError)
 		return
 	}
-	defer dest.Close()
-
-	size, err := io.Copy(dest, file)
-	if err != nil {
-		os.Remove(destPath) //nolint:errcheck
-		http.Error(w, "write error", http.StatusInternalServerError)
-		return
-	}
-
-	// Storage path relative (served via /uploads/).
-	storagePath := "/uploads/" + filename

 	asset, err := media.Create(r.Context(), tenantID, title, assetType, storagePath, "", mimeType, size)
 	if err != nil {
-		os.Remove(destPath) //nolint:errcheck
 		http.Error(w, "db error", http.StatusInternalServerError)
 		return
 	}

@@ -138,6 +123,17 @@ func HandleDeleteMedia(media *store.MediaStore, uploadDir string) http.HandlerFu
 		return
 	}

+	// K3: tenant check — only the asset's own tenant or an admin may delete.
+	u := reqcontext.UserFromContext(r.Context())
+	if u == nil {
+		http.Error(w, "Forbidden", http.StatusForbidden)
+		return
+	}
+	if u.Role != "admin" && u.TenantID != asset.TenantID {
+		http.Error(w, "Forbidden", http.StatusForbidden)
+		return
+	}
+
 	// Delete physical file if it's a local upload.
 	if asset.StoragePath != "" {
 		filename := filepath.Base(asset.StoragePath)
@@ -9,9 +9,28 @@ import (
 	"time"

 	"git.az-it.net/az/morz-infoboard/server/backend/internal/mqttnotifier"
+	"git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
 	"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
 )

+// requirePlaylistAccess checks whether the logged-in user belongs to the playlist's tenant.
+// Returns true if access is allowed; writes a 403 and returns false if not.
+func requirePlaylistAccess(w http.ResponseWriter, r *http.Request, playlist *store.Playlist) bool {
+	u := reqcontext.UserFromContext(r.Context())
+	if u == nil {
+		http.Error(w, "Forbidden", http.StatusForbidden)
+		return false
+	}
+	if u.Role == "admin" {
+		return true
+	}
+	if u.TenantID != playlist.TenantID {
+		http.Error(w, "Forbidden", http.StatusForbidden)
+		return false
+	}
+	return true
+}
+
 // HandleGetPlaylist returns the playlist and its items for a screen.
 func HandleGetPlaylist(screens *store.ScreenStore, playlists *store.PlaylistStore) http.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) {

@@ -48,6 +67,16 @@ func HandleAddItem(playlists *store.PlaylistStore, media *store.MediaStore, noti
 	return func(w http.ResponseWriter, r *http.Request) {
 		playlistID := r.PathValue("playlistId")

+		// K4: tenant check.
+		playlist, err := playlists.Get(r.Context(), playlistID)
+		if err != nil {
+			http.Error(w, "playlist not found", http.StatusNotFound)
+			return
+		}
+		if !requirePlaylistAccess(w, r, playlist) {
+			return
+		}
+
 		var body struct {
 			MediaAssetID string `json:"media_asset_id"`
 			Type         string `json:"type"`

@@ -114,6 +143,16 @@ func HandleUpdateItem(playlists *store.PlaylistStore, notifier *mqttnotifier.Not
 	return func(w http.ResponseWriter, r *http.Request) {
 		id := r.PathValue("itemId")

+		// K4: tenant check via the item's playlist.
+		playlist, err := playlists.GetByItemID(r.Context(), id)
+		if err != nil {
+			http.Error(w, "item not found", http.StatusNotFound)
+			return
+		}
+		if !requirePlaylistAccess(w, r, playlist) {
+			return
+		}
+
 		var body struct {
 			Title           string `json:"title"`
 			DurationSeconds int    `json:"duration_seconds"`

@@ -155,6 +194,16 @@ func HandleDeleteItem(playlists *store.PlaylistStore, notifier *mqttnotifier.Not
 	return func(w http.ResponseWriter, r *http.Request) {
 		id := r.PathValue("itemId")

+		// K4: tenant check via the item's playlist.
+		playlist, err := playlists.GetByItemID(r.Context(), id)
+		if err != nil {
+			http.Error(w, "item not found", http.StatusNotFound)
+			return
+		}
+		if !requirePlaylistAccess(w, r, playlist) {
+			return
+		}
+
 		// Resolve slug before delete (item won't exist after).
 		slug, _ := playlists.ScreenSlugByItemID(r.Context(), id)

@@ -176,6 +225,16 @@ func HandleReorder(playlists *store.PlaylistStore, notifier *mqttnotifier.Notifi
 	return func(w http.ResponseWriter, r *http.Request) {
 		playlistID := r.PathValue("playlistId")

+		// K4: tenant check.
+		playlist, err := playlists.Get(r.Context(), playlistID)
+		if err != nil {
+			http.Error(w, "playlist not found", http.StatusNotFound)
+			return
+		}
+		if !requirePlaylistAccess(w, r, playlist) {
+			return
+		}
+
 		var ids []string
 		if err := json.NewDecoder(r.Body).Decode(&ids); err != nil {
 			http.Error(w, "body must be JSON array of item IDs", http.StatusBadRequest)

@@ -199,6 +258,17 @@ func HandleUpdatePlaylistDuration(playlists *store.PlaylistStore) http.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) {
 		id := r.PathValue("playlistId")

+		// K4: tenant check.
+		playlist, err := playlists.Get(r.Context(), id)
+		if err != nil {
+			http.Error(w, "playlist not found", http.StatusNotFound)
+			return
+		}
+		if !requirePlaylistAccess(w, r, playlist) {
+			return
+		}
+
 		secs, err := strconv.Atoi(strings.TrimSpace(r.FormValue("default_duration_seconds")))
 		if err != nil || secs <= 0 {
 			http.Error(w, "invalid duration", http.StatusBadRequest)
@@ -294,7 +364,7 @@ func HandleCreateScreen(tenants *store.TenantStore, screens *store.ScreenStore)

 	screen, err := screens.Create(r.Context(), tenant.ID, body.Slug, body.Name, body.Orientation)
 	if err != nil {
-		http.Error(w, "db error: "+err.Error(), http.StatusInternalServerError)
+		http.Error(w, "db error", http.StatusInternalServerError)
 		return
 	}
@@ -15,8 +15,20 @@ import (
 //
 // POST /api/v1/screens/register
 // Body: {"slug":"info10","name":"Info10 Bildschirm","orientation":"landscape"}
+//
+// K6: if MORZ_INFOBOARD_REGISTER_SECRET is set, the caller must send the
+// header X-Register-Secret: <secret>. Without a valid secret the endpoint
+// answers 403 Forbidden.
 func HandleRegisterScreen(tenants *store.TenantStore, screens *store.ScreenStore, cfg config.Config) http.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) {
+		// K6: secret check, when configured.
+		if cfg.RegisterSecret != "" {
+			if r.Header.Get("X-Register-Secret") != cfg.RegisterSecret {
+				http.Error(w, "Forbidden", http.StatusForbidden)
+				return
+			}
+		}
+
 		var body struct {
 			Slug string `json:"slug"`
 			Name string `json:"name"`

@@ -49,7 +61,7 @@ func HandleRegisterScreen(tenants *store.TenantStore, screens *store.ScreenStore

 	screen, err := screens.Upsert(r.Context(), tenant.ID, body.Slug, body.Name, body.Orientation)
 	if err != nil {
-		http.Error(w, "db error: "+err.Error(), http.StatusInternalServerError)
+		http.Error(w, "db error", http.StatusInternalServerError)
 		return
 	}
@@ -33,6 +33,7 @@ const loginTmpl = `<!DOCTYPE html>
 	{{end}}

 	<form method="POST" action="/login">
+		<input type="hidden" name="csrf_token" value="{{.CSRFToken}}">
 		{{if .Next}}
 		<input type="hidden" name="next" value="{{.Next}}">
 		{{end}}
@@ -503,6 +504,31 @@ document.addEventListener('keydown', function(e) {
 	.catch(function() {});
 })();
 </script>
+<script>
+// K1: CSRF double-submit — inject the token from the cookie into every POST form.
+(function() {
+	function getCookie(name) {
+		var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
+		return m ? decodeURIComponent(m[1]) : '';
+	}
+	function injectCSRF() {
+		var token = getCookie('morz_csrf');
+		if (!token) return;
+		document.querySelectorAll('form[method="POST"],form[method="post"]').forEach(function(f) {
+			if (!f.querySelector('input[name="csrf_token"]')) {
+				var inp = document.createElement('input');
+				inp.type = 'hidden'; inp.name = 'csrf_token'; inp.value = token;
+				f.appendChild(inp);
+			}
+		});
+	}
+	if (document.readyState === 'loading') {
+		document.addEventListener('DOMContentLoaded', injectCSRF);
+	} else {
+		injectCSRF();
+	}
+})();
+</script>
 </body>
 </html>`
@@ -969,6 +995,31 @@ function startUpload() {
 	xhr.send(formData);
 }
 </script>
+<script>
+// K1: CSRF double-submit — inject the token from the cookie into every POST form.
+(function() {
+	function getCookie(name) {
+		var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
+		return m ? decodeURIComponent(m[1]) : '';
+	}
+	function injectCSRF() {
+		var token = getCookie('morz_csrf');
+		if (!token) return;
+		document.querySelectorAll('form[method="POST"],form[method="post"]').forEach(function(f) {
+			if (!f.querySelector('input[name="csrf_token"]')) {
+				var inp = document.createElement('input');
+				inp.type = 'hidden'; inp.name = 'csrf_token'; inp.value = token;
+				f.appendChild(inp);
+			}
+		});
+	}
+	if (document.readyState === 'loading') {
+		document.addEventListener('DOMContentLoaded', injectCSRF);
+	} else {
+		injectCSRF();
+	}
+})();
+</script>
+
 </body>
 </html>`
@ -1,10 +1,9 @@
|
||||||
package manage
|
package manage
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"bytes"
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
"fmt"
|
|
||||||
"html/template"
|
"html/template"
|
||||||
"io"
|
|
||||||
"net/http"
|
"net/http"
|
||||||
"os"
|
"os"
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
|
|
@ -12,11 +11,52 @@ import (
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/fileutil"
|
||||||
"git.az-it.net/az/morz-infoboard/server/backend/internal/mqttnotifier"
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/mqttnotifier"
|
||||||
"git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/reqcontext"
|
||||||
"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
+// renderTemplate renders t with data into a buffer and writes the result to w
+// only once no error has occurred. W7: prevents half-rendered HTML on error.
+func renderTemplate(w http.ResponseWriter, t *template.Template, data any) {
+    var buf bytes.Buffer
+    if err := t.Execute(&buf, data); err != nil {
+        http.Error(w, "Interner Fehler (Template)", http.StatusInternalServerError)
+        return
+    }
+    w.Header().Set("Content-Type", "text/html; charset=utf-8")
+    buf.WriteTo(w) //nolint:errcheck
+}
+
+// requireScreenAccess checks whether the logged-in user may access the screen.
+// Admins may access everything; tenant users may only modify screens belonging
+// to their own tenant. Returns true if access is allowed; otherwise writes 403
+// and returns false.
+func requireScreenAccess(w http.ResponseWriter, r *http.Request, screen *store.Screen) bool {
+    u := reqcontext.UserFromContext(r.Context())
+    if u == nil {
+        http.Error(w, "Forbidden", http.StatusForbidden)
+        return false
+    }
+    if u.Role == "admin" {
+        return true
+    }
+    // Tenant user: the screen must belong to the user's own tenant. Compare the
+    // user's TenantID with the screen's TenantID; an empty user TenantID falls
+    // through and is allowed.
+    if u.TenantID != "" && u.TenantID != screen.TenantID {
+        http.Error(w, "Forbidden", http.StatusForbidden)
+        return false
+    }
+    return true
+}
+
 var tmplFuncs = template.FuncMap{
     "typeIcon": func(t string) string {
         switch t {
@@ -66,8 +106,7 @@ func HandleAdminUI(tenants *store.TenantStore, screens *store.ScreenStore) http.
         http.Error(w, "db error", http.StatusInternalServerError)
         return
     }
-    w.Header().Set("Content-Type", "text/html; charset=utf-8")
-    t.Execute(w, map[string]any{ //nolint:errcheck
+    renderTemplate(w, t, map[string]any{
         "Screens": allScreens,
         "Tenants": allTenants,
     })
@@ -91,6 +130,11 @@ func HandleManageUI(
         return
     }
 
+    // K2: tenant isolation (own tenant or admin only).
+    if !requireScreenAccess(w, r, screen) {
+        return
+    }
+
     var tenant *store.Tenant
     if u := reqcontext.UserFromContext(r.Context()); u != nil && u.TenantSlug != "" {
         tenant, _ = tenants.Get(r.Context(), u.TenantSlug)
@@ -139,8 +183,7 @@ func HandleManageUI(
         }
     }
 
-    w.Header().Set("Content-Type", "text/html; charset=utf-8")
-    t.Execute(w, map[string]any{ //nolint:errcheck
+    renderTemplate(w, t, map[string]any{
         "Screen": screen,
         "Tenant": tenant,
         "Playlist": playlist,
@@ -183,7 +226,7 @@ func HandleCreateScreenUI(tenants *store.TenantStore, screens *store.ScreenStore
 
     _, err = screens.Create(r.Context(), tenant.ID, slug, name, orientation)
     if err != nil {
-        http.Error(w, "Fehler: "+err.Error(), http.StatusInternalServerError)
+        http.Error(w, "Interner Fehler", http.StatusInternalServerError)
         return
     }
     http.Redirect(w, r, "/admin?msg=added", http.StatusSeeOther)
@@ -230,12 +273,11 @@ func HandleProvisionUI(tenants *store.TenantStore, screens *store.ScreenStore) h
 
     screen, err := screens.Upsert(r.Context(), tenant.ID, slug, name, orientation)
     if err != nil {
-        http.Error(w, "DB-Fehler: "+err.Error(), http.StatusInternalServerError)
+        http.Error(w, "Interner Fehler", http.StatusInternalServerError)
         return
     }
 
-    w.Header().Set("Content-Type", "text/html; charset=utf-8")
-    t.Execute(w, map[string]any{ //nolint:errcheck
+    renderTemplate(w, t, map[string]any{
         "Screen": screen,
         "IP": ip,
         "SSHUser": sshUser,
@@ -267,6 +309,14 @@ func HandleUploadMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
         return
     }
 
+    // K2: tenant isolation.
+    if !requireScreenAccess(w, r, screen) {
+        return
+    }
+
+    // W3: MaxBytesReader caps uploads at maxUploadSize.
+    r.Body = http.MaxBytesReader(w, r.Body, maxUploadSize)
+
     if err := r.ParseMultipartForm(maxUploadSize); err != nil {
         http.Error(w, "Upload zu groß oder ungültig", http.StatusBadRequest)
         return
@@ -275,6 +325,15 @@ func HandleUploadMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
     assetType := strings.TrimSpace(r.FormValue("type"))
     title := strings.TrimSpace(r.FormValue("title"))
 
+    // Determine tenantSlug for N6 (tenant-specific upload directory).
+    tenantSlug := ""
+    if u := reqcontext.UserFromContext(r.Context()); u != nil && u.TenantSlug != "" {
+        tenantSlug = u.TenantSlug
+    }
+    if tenantSlug == "" {
+        tenantSlug = "default"
+    }
+
     switch assetType {
     case "web":
         url := strings.TrimSpace(r.FormValue("url"))
@@ -297,17 +356,12 @@ func HandleUploadMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
             title = strings.TrimSuffix(header.Filename, filepath.Ext(header.Filename))
         }
         mimeType := header.Header.Get("Content-Type")
-        ext := filepath.Ext(header.Filename)
-        filename := fmt.Sprintf("%d_%s%s", time.Now().UnixNano(), sanitize(title), ext)
-        destPath := filepath.Join(uploadDir, filename)
-        dest, ferr := os.Create(destPath)
+        // V1+N6: shared upload helper, tenant-specific directory.
+        storagePath, size, ferr := fileutil.SaveUploadedFile(file, header.Filename, title, uploadDir, tenantSlug)
         if ferr != nil {
             http.Error(w, "Speicherfehler", http.StatusInternalServerError)
             return
         }
-        defer dest.Close()
-        size, _ := io.Copy(dest, file)
-        storagePath := "/uploads/" + filename
         _, err = media.Create(r.Context(), screen.TenantID, title, assetType, storagePath, "", mimeType, size)
     default:
         http.Error(w, "Unbekannter Typ", http.StatusBadRequest)
@@ -315,7 +369,7 @@ func HandleUploadMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
     }
 
     if err != nil {
-        http.Error(w, "DB-Fehler: "+err.Error(), http.StatusInternalServerError)
+        http.Error(w, "DB-Fehler", http.StatusInternalServerError)
         return
     }
     http.Redirect(w, r, "/manage/"+screenSlug+"?msg=uploaded", http.StatusSeeOther)
@@ -337,6 +391,11 @@ func HandleAddItemUI(playlists *store.PlaylistStore, media *store.MediaStore, sc
         return
     }
 
+    // K2: tenant isolation.
+    if !requireScreenAccess(w, r, screen) {
+        return
+    }
+
     playlist, err := playlists.GetOrCreateForScreen(r.Context(), screen.TenantID, screen.ID, screen.Name)
     if err != nil {
         http.Error(w, "db error", http.StatusInternalServerError)
@@ -388,10 +447,21 @@ func HandleAddItemUI(playlists *store.PlaylistStore, media *store.MediaStore, sc
     }
 
 // HandleDeleteItemUI removes a playlist item and redirects back.
-func HandleDeleteItemUI(playlists *store.PlaylistStore, notifier *mqttnotifier.Notifier) http.HandlerFunc {
+func HandleDeleteItemUI(playlists *store.PlaylistStore, screens *store.ScreenStore, notifier *mqttnotifier.Notifier) http.HandlerFunc {
     return func(w http.ResponseWriter, r *http.Request) {
         screenSlug := r.PathValue("screenSlug")
         itemID := r.PathValue("itemId")
 
+        // K2: tenant isolation.
+        screen, err := screens.GetBySlug(r.Context(), screenSlug)
+        if err != nil {
+            http.Error(w, "screen nicht gefunden", http.StatusNotFound)
+            return
+        }
+        if !requireScreenAccess(w, r, screen) {
+            return
+        }
+
         if err := playlists.DeleteItem(r.Context(), itemID); err != nil {
             http.Error(w, "db error", http.StatusInternalServerError)
             return
@@ -410,6 +480,10 @@ func HandleReorderUI(playlists *store.PlaylistStore, screens *store.ScreenStore,
         http.Error(w, "screen nicht gefunden", http.StatusNotFound)
         return
     }
+    // K2: tenant isolation.
+    if !requireScreenAccess(w, r, screen) {
+        return
+    }
     playlist, err := playlists.GetByScreen(r.Context(), screen.ID)
     if err != nil {
         http.Error(w, "playlist nicht gefunden", http.StatusNotFound)
@@ -430,10 +504,21 @@ func HandleReorderUI(playlists *store.PlaylistStore, screens *store.ScreenStore,
     }
 
 // HandleUpdateItemUI handles form PATCH/POST to update a single item.
-func HandleUpdateItemUI(playlists *store.PlaylistStore, notifier *mqttnotifier.Notifier) http.HandlerFunc {
+func HandleUpdateItemUI(playlists *store.PlaylistStore, screens *store.ScreenStore, notifier *mqttnotifier.Notifier) http.HandlerFunc {
     return func(w http.ResponseWriter, r *http.Request) {
         screenSlug := r.PathValue("screenSlug")
         itemID := r.PathValue("itemId")
 
+        // K2: tenant isolation.
+        screen, err := screens.GetBySlug(r.Context(), screenSlug)
+        if err != nil {
+            http.Error(w, "screen nicht gefunden", http.StatusNotFound)
+            return
+        }
+        if !requireScreenAccess(w, r, screen) {
+            return
+        }
+
         if err := r.ParseForm(); err != nil {
             http.Error(w, "bad form", http.StatusBadRequest)
             return
@@ -462,6 +547,16 @@ func HandleDeleteMediaUI(media *store.MediaStore, screens *store.ScreenStore, up
         screenSlug := r.PathValue("screenSlug")
         mediaID := r.PathValue("mediaId")
 
+        // K2: tenant isolation.
+        screen, err := screens.GetBySlug(r.Context(), screenSlug)
+        if err != nil {
+            http.Error(w, "screen nicht gefunden", http.StatusNotFound)
+            return
+        }
+        if !requireScreenAccess(w, r, screen) {
+            return
+        }
+
         asset, err := media.Get(r.Context(), mediaID)
         if err == nil && asset.StoragePath != "" {
             os.Remove(filepath.Join(uploadDir, filepath.Base(asset.StoragePath))) //nolint:errcheck
@@ -23,7 +23,7 @@ func UserFromContext(ctx context.Context) *store.User {
 func RequireAuth(authStore *store.AuthStore) func(http.Handler) http.Handler {
     return func(next http.Handler) http.Handler {
         return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-            cookie, err := r.Cookie("morz_session")
+            cookie, err := r.Cookie(reqcontext.SessionCookieName)
             if err != nil {
                 redirectToLogin(w, r)
                 return

server/backend/internal/httpapi/ratelimit.go (new file, 91 lines)
@@ -0,0 +1,91 @@
+package httpapi
+
+// ratelimit.go: simple in-memory rate limiting for POST /login (N1).
+//
+// Implementation: fixed-window counter per IP address (the window resets
+// once it expires). Allows at most loginMaxAttempts attempts per loginWindow.
+// Stale entries are purged from the map periodically.
+
+import (
+    "net"
+    "net/http"
+    "sync"
+    "time"
+)
+
+const (
+    loginMaxAttempts = 5
+    loginWindow      = 1 * time.Minute
+    cleanupInterval  = 5 * time.Minute
+)
+
+type loginAttempt struct {
+    count     int
+    windowEnd time.Time
+}
+
+type loginRateLimiter struct {
+    mu      sync.Mutex
+    entries map[string]*loginAttempt
+}
+
+func newLoginRateLimiter() *loginRateLimiter {
+    rl := &loginRateLimiter{
+        entries: make(map[string]*loginAttempt),
+    }
+    go rl.cleanup()
+    return rl
+}
+
+// Allow returns true if the IP is within the rate limit, false if it should be blocked.
+func (rl *loginRateLimiter) Allow(ip string) bool {
+    rl.mu.Lock()
+    defer rl.mu.Unlock()
+
+    now := time.Now()
+    e, ok := rl.entries[ip]
+    if !ok || now.After(e.windowEnd) {
+        // Start a new window.
+        rl.entries[ip] = &loginAttempt{count: 1, windowEnd: now.Add(loginWindow)}
+        return true
+    }
+    e.count++
+    return e.count <= loginMaxAttempts
+}
+
+// cleanup periodically removes expired entries.
+func (rl *loginRateLimiter) cleanup() {
+    ticker := time.NewTicker(cleanupInterval)
+    defer ticker.Stop()
+    for range ticker.C {
+        rl.mu.Lock()
+        now := time.Now()
+        for ip, e := range rl.entries {
+            if now.After(e.windowEnd) {
+                delete(rl.entries, ip)
+            }
+        }
+        rl.mu.Unlock()
+    }
+}
+
+// LoginRateLimit is a global instance of the rate limiter (package-level singleton).
+var LoginRateLimit = newLoginRateLimiter()
+
+// RateLimitLogin is middleware that blocks brute-force attacks on the login endpoint.
+// When the limit is exceeded it responds with 429 Too Many Requests.
+func RateLimitLogin(next http.Handler) http.Handler {
+    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+        // Extract the IP address (deliberately ignores X-Forwarded-For to avoid spoofing).
+        ip, _, err := net.SplitHostPort(r.RemoteAddr)
+        if err != nil {
+            ip = r.RemoteAddr
+        }
+
+        if !LoginRateLimit.Allow(ip) {
+            http.Error(w, "Zu viele Anmeldeversuche. Bitte warte eine Minute.", http.StatusTooManyRequests)
+            return
+        }
+        next.ServeHTTP(w, r)
+    })
+}
@@ -85,27 +85,43 @@ func registerManageRoutes(mux *http.ServeMux, d RouterDeps) {
         notifier = mqttnotifier.New("", "", "")
     }
 
-    // Serve uploaded files.
-    mux.Handle("GET /uploads/", http.StripPrefix("/uploads/", http.FileServer(http.Dir(uploadDir))))
+    // Serve uploaded files. N5: directory listing disabled via neuteredFileSystem.
+    mux.Handle("GET /uploads/", http.StripPrefix("/uploads/", http.FileServer(neuteredFileSystem{http.Dir(uploadDir)})))
 
     // Serve embedded static assets (Bulma CSS, SortableJS) — no external CDN needed.
     mux.HandleFunc("GET /static/bulma.min.css", manage.HandleStaticBulmaCSS())
     mux.HandleFunc("GET /static/Sortable.min.js", manage.HandleStaticSortableJS())
 
+    // K1: CSRF protection for all state-changing routes.
+    csrf := CSRFProtect(d.Config.DevMode)
+
+    // K1: set the CSRF cookie on GET requests so the JS inject script can read it.
+    setCSRF := func(h http.Handler) http.Handler {
+        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+            if r.Method == http.MethodGet {
+                SetCSRFCookie(w, r, d.Config.DevMode)
+            }
+            h.ServeHTTP(w, r)
+        })
+    }
+
     // ── Auth (no auth middleware required) ────────────────────────────────
-    mux.HandleFunc("GET /login", manage.HandleLoginUI(d.AuthStore))
-    mux.HandleFunc("POST /login", manage.HandleLoginPost(d.AuthStore, d.Config))
-    mux.HandleFunc("POST /logout", manage.HandleLogoutPost(d.AuthStore, d.Config))
+    // K1: GET /login sets the CSRF cookie; POST /login and POST /logout are CSRF-checked.
+    mux.Handle("GET /login", http.HandlerFunc(manage.HandleLoginUI(d.AuthStore, d.Config)))
+    // N1: rate limiting on /login (max. 5 attempts/minute per IP).
+    mux.Handle("POST /login", RateLimitLogin(csrf(http.HandlerFunc(manage.HandleLoginPost(d.AuthStore, d.Config)))))
+    mux.Handle("POST /logout", csrf(http.HandlerFunc(manage.HandleLogoutPost(d.AuthStore, d.Config))))
 
     // Shorthand middleware combinators for this router.
+    // For GET routes setCSRF sets the cookie; for POST routes csrf validates it.
     authOnly := func(h http.Handler) http.Handler {
-        return chain(h, RequireAuth(d.AuthStore))
+        return chain(h, RequireAuth(d.AuthStore), setCSRF, csrf)
     }
     authAdmin := func(h http.Handler) http.Handler {
-        return chain(h, RequireAuth(d.AuthStore), RequireAdmin)
+        return chain(h, RequireAuth(d.AuthStore), RequireAdmin, setCSRF, csrf)
     }
     authTenant := func(h http.Handler) http.Handler {
-        return chain(h, RequireAuth(d.AuthStore), RequireTenantAccess)
+        return chain(h, RequireAuth(d.AuthStore), RequireTenantAccess, setCSRF, csrf)
     }
 
     // ── Admin UI ──────────────────────────────────────────────────────────
@@ -126,9 +142,9 @@ func registerManageRoutes(mux *http.ServeMux, d RouterDeps) {
     mux.Handle("POST /manage/{screenSlug}/items",
         authOnly(http.HandlerFunc(manage.HandleAddItemUI(d.PlaylistStore, d.MediaStore, d.ScreenStore, notifier))))
     mux.Handle("POST /manage/{screenSlug}/items/{itemId}",
-        authOnly(http.HandlerFunc(manage.HandleUpdateItemUI(d.PlaylistStore, notifier))))
+        authOnly(http.HandlerFunc(manage.HandleUpdateItemUI(d.PlaylistStore, d.ScreenStore, notifier))))
     mux.Handle("POST /manage/{screenSlug}/items/{itemId}/delete",
-        authOnly(http.HandlerFunc(manage.HandleDeleteItemUI(d.PlaylistStore, notifier))))
+        authOnly(http.HandlerFunc(manage.HandleDeleteItemUI(d.PlaylistStore, d.ScreenStore, notifier))))
     mux.Handle("POST /manage/{screenSlug}/reorder",
         authOnly(http.HandlerFunc(manage.HandleReorderUI(d.PlaylistStore, d.ScreenStore, notifier))))
     mux.Handle("POST /manage/{screenSlug}/media/{mediaId}/delete",
@@ -294,6 +294,31 @@ function toggleUploadFields() {
     setInterval(pollStatus, 30000);
 })();
 </script>
+<script>
+// K1: CSRF double-submit; insert the token from the cookie into every POST form.
+(function() {
+    function getCookie(name) {
+        var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
+        return m ? decodeURIComponent(m[1]) : '';
+    }
+    function injectCSRF() {
+        var token = getCookie('morz_csrf');
+        if (!token) return;
+        document.querySelectorAll('form[method="POST"],form[method="post"]').forEach(function(f) {
+            if (!f.querySelector('input[name="csrf_token"]')) {
+                var inp = document.createElement('input');
+                inp.type = 'hidden'; inp.name = 'csrf_token'; inp.value = token;
+                f.appendChild(inp);
+            }
+        });
+    }
+    if (document.readyState === 'loading') {
+        document.addEventListener('DOMContentLoaded', injectCSRF);
+    } else {
+        injectCSRF();
+    }
+})();
+</script>
 
 </body>
 </html>`
|
||||||
|
|
|
||||||
|
|
@ -2,15 +2,15 @@
|
||||||
package tenant
|
package tenant
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"bytes"
|
||||||
"fmt"
|
"fmt"
|
||||||
"html/template"
|
"html/template"
|
||||||
"io"
|
|
||||||
"net/http"
|
"net/http"
|
||||||
"os"
|
"os"
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
|
||||||
|
|
||||||
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/fileutil"
|
||||||
"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
@@ -94,13 +94,19 @@ func HandleTenantDashboard(
         }
     }
 
-    w.Header().Set("Content-Type", "text/html; charset=utf-8")
-    t.Execute(w, map[string]any{ //nolint:errcheck
+    // W7: render the template into a buffer; send it to the client only on success.
+    var buf bytes.Buffer
+    if err := t.Execute(&buf, map[string]any{
         "Tenant": tenant,
         "Screens": screens,
         "Assets": assets,
         "Flash": flash,
-    })
+    }); err != nil {
+        http.Error(w, "Interner Fehler (Template)", http.StatusInternalServerError)
+        return
+    }
+    w.Header().Set("Content-Type", "text/html; charset=utf-8")
+    buf.WriteTo(w) //nolint:errcheck
     }
 }
@@ -120,6 +126,9 @@ func HandleTenantUpload(
         return
     }
 
+    // W3: MaxBytesReader caps uploads at maxUploadSize before ParseMultipartForm.
+    r.Body = http.MaxBytesReader(w, r.Body, maxUploadSize)
+
     if err := r.ParseMultipartForm(maxUploadSize); err != nil {
         http.Error(w, "Upload zu groß oder ungültig", http.StatusBadRequest)
         return
@@ -156,24 +165,12 @@ func HandleTenantUpload(
         if detected := mimeToAssetType(mimeType); detected != "" {
             assetType = detected
         }
-        ext := filepath.Ext(header.Filename)
-        filename := fmt.Sprintf("%d_%s%s", time.Now().UnixNano(), sanitize(title), ext)
-        destPath := filepath.Join(uploadDir, filename)
-
-        dest, ferr := os.Create(destPath)
-        if ferr != nil {
+        // V1+N6: tenant-specific upload directory.
+        storagePath, size, cerr := fileutil.SaveUploadedFile(file, header.Filename, title, uploadDir, tenantSlug)
+        if cerr != nil {
             http.Error(w, "Speicherfehler", http.StatusInternalServerError)
             return
         }
-        defer dest.Close()
-
-        size, cerr := io.Copy(dest, file)
-        if cerr != nil {
-            os.Remove(destPath) //nolint:errcheck
-            http.Error(w, "Schreibfehler", http.StatusInternalServerError)
-            return
-        }
-        storagePath := "/uploads/" + filename
         _, err = mediaStore.Create(r.Context(), tenant.ID, title, assetType, storagePath, "", mimeType, size)
 
     default:
@@ -182,7 +179,7 @@ func HandleTenantUpload(
     }
 
     if err != nil {
-        http.Error(w, "DB-Fehler: "+err.Error(), http.StatusInternalServerError)
+        http.Error(w, "Interner Fehler", http.StatusInternalServerError)
         return
     }
     http.Redirect(w, r, "/tenant/"+tenantSlug+"/dashboard?tab=media&flash=uploaded", http.StatusSeeOther)

server/backend/internal/httpapi/uploads.go (new file, 32 lines)
@@ -0,0 +1,32 @@
+package httpapi
+
+// uploads.go: helpers for serving uploads safely (N5, N6).
+
+import (
+    "net/http"
+    "os"
+)
+
+// neuteredFileSystem wraps an http.FileSystem and disables directory listing (N5).
+// When Open() returns a directory, it returns an error as if the file was not found.
+type neuteredFileSystem struct {
+    fs http.FileSystem
+}
+
+func (nfs neuteredFileSystem) Open(path string) (http.File, error) {
+    f, err := nfs.fs.Open(path)
+    if err != nil {
+        return nil, err
+    }
+    s, err := f.Stat()
+    if err != nil {
+        f.Close() //nolint:errcheck
+        return nil, err
+    }
+    if s.IsDir() {
+        // Return os.ErrNotExist so http.FileServer responds with 404.
+        f.Close() //nolint:errcheck
+        return nil, os.ErrNotExist
+    }
+    return f, nil
+}
|
||||||
|
|
@ -10,6 +10,11 @@ import (
|
||||||
"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
|
"git.az-it.net/az/morz-infoboard/server/backend/internal/store"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
// SessionCookieName ist der HTTP-Cookie-Name für Sitzungen.
|
||||||
|
// Er wird in middleware.go (RequireAuth) und manage/auth.go (Login/Logout)
|
||||||
|
// verwendet und hier zentral definiert, um Duplizierung zu vermeiden.
|
||||||
|
const SessionCookieName = "morz_session"
|
||||||
|
|
||||||
type contextKey int
|
type contextKey int
|
||||||
|
|
||||||
const contextKeyUser contextKey = 0
|
const contextKeyUser contextKey = 0
|
||||||
|
|
|
||||||
|
|
@@ -305,6 +305,18 @@ func (s *PlaylistStore) GetByScreen(ctx context.Context, screenID string) (*Play
     return scanPlaylist(row)
 }
 
+// GetByItemID returns the playlist that contains the given playlist item.
+// Used for tenant-isolation checks (K4).
+func (s *PlaylistStore) GetByItemID(ctx context.Context, itemID string) (*Playlist, error) {
+    row := s.pool.QueryRow(ctx,
+        `select pl.id, pl.tenant_id, pl.screen_id, pl.name, pl.is_active,
+                pl.default_duration_seconds, pl.created_at, pl.updated_at
+           from playlists pl
+           join playlist_items pi on pi.playlist_id = pl.id
+          where pi.id = $1`, itemID)
+    return scanPlaylist(row)
+}
+
 func (s *PlaylistStore) UpdateDefaultDuration(ctx context.Context, id string, seconds int) error {
     _, err := s.pool.Exec(ctx,
         `update playlists set default_duration_seconds=$2, updated_at=now() where id=$1`, id, seconds)
@@ -373,23 +385,20 @@ func (s *PlaylistStore) ListActiveItems(ctx context.Context, playlistID string)
 }
 
 func (s *PlaylistStore) AddItem(ctx context.Context, playlistID, mediaAssetID, itemType, src, title string, durationSeconds int, validFrom, validUntil *time.Time) (*PlaylistItem, error) {
-    // Place at end of list.
-    var maxIdx int
-    s.pool.QueryRow(ctx,
-        `select coalesce(max(order_index)+1, 0) from playlist_items where playlist_id=$1`, playlistID,
-    ).Scan(&maxIdx) //nolint:errcheck
-
     var mediaID *string
     if mediaAssetID != "" {
         mediaID = &mediaAssetID
     }
+    // W1: atomic subquery instead of two separate queries; prevents a race on order_index.
     row := s.pool.QueryRow(ctx,
         `insert into playlist_items(playlist_id, media_asset_id, order_index, type, src, title, duration_seconds, valid_from, valid_until)
-         values($1,$2,$3,$4,$5,$6,$7,$8,$9)
+         values($1,$2,
+                (select coalesce(max(order_index)+1, 0) from playlist_items where playlist_id=$1),
+                $3,$4,$5,$6,$7,$8)
          returning id, playlist_id, coalesce(media_asset_id,''), order_index, type, src,
                    coalesce(title,''), duration_seconds, valid_from, valid_until, enabled, created_at`,
-        playlistID, mediaID, maxIdx, itemType, src, title, durationSeconds, validFrom, validUntil)
+        playlistID, mediaID, itemType, src, title, durationSeconds, validFrom, validUntil)
     return scanPlaylistItem(row)
 }