https://kb.shelly.cloud/knowledge-base/shelly-plus-1pm-web-interface-guide
192.168.33.1
Id: A8032AB82968
bedroom: shellyplus1pm-a8032ab82968 → 192.168.178.53
top child room: shellyplus1pm-a8032abba0b0 → 192.168.178.54
living room: shellyplusi4-a8032ab1d7d0 → 192.168.178.55
bed room: shellyplusi4-a8032ab1c9e0 → 192.168.178.56
shellyrgbw2-E0AC83 → 192.168.178.76

FRITZ!Box 7530 GD
78302839617779115068
MQTT credentials:
192.168.178.32:1883 → 192.168.178.52:1883
note4
note4

https://shelly-api-docs.shelly.cloud/gen2/General/RPCChannels
shellyplus1pm-a8032ab82968/rpc
shellyplus1pm-a8032ab82968/events/rpc
shellyplusi4-a8032ab1d7d0/events/rpc
shellyplusi4-a8032ab1c9e0/events/rpc
shellyrgbw2-E0AC83/events/rpc
https://www.home-assistant.io/integrations/shelly/
Generation 2 devices use the values btn_down, btn_up, single_push, double_push and long_push as click_type.
http://192.168.178.53/rpc/Shelly.GetStatus
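A quick way to poll that RPC endpoint from a script (a minimal Python sketch; the IP is the bedroom Plus 1PM from above, and the response sample is abridged):

```python
import json
from urllib.request import urlopen

def fetch_status(ip: str) -> dict:
    """Poll a Gen2 Shelly's full status via its HTTP RPC endpoint."""
    with urlopen(f"http://{ip}/rpc/Shelly.GetStatus", timeout=5) as resp:
        return json.load(resp)

def switch_power(status: dict) -> float:
    """Instantaneous power (W) of the first relay channel."""
    return status["switch:0"]["apower"]

# Abridged sample of what a Plus 1PM returns:
sample = json.loads('{"switch:0": {"id": 0, "output": true, "apower": 7.7}}')
print(switch_power(sample))  # → 7.7
```

e.g. `fetch_status("192.168.178.53")` returns the same structure live.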
LED strip power consumption (green channel stuck at 100%; red LEDs not connected on one strip; ca. 250 LEDs in total):
green 100%: 7.7 W
green+blue 100%: 16.3 W
green+red 100%: 18.7 W
white = green+blue+red 100%: 26.4 W
green 100%+blue 50%+red 50%: 17.8 W
green 100%+blue 1%+red 1%: 9.7 W
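Assuming the channels add roughly linearly, the single-channel draw can be backed out of the measurements above (green alone 7.7 W, so blue ≈ 16.3 − 7.7 and red ≈ 18.7 − 7.7), which predicts 27.3 W for white vs. the measured 26.4 W:

```python
green = 7.7               # green 100% alone
blue = 16.3 - green       # green+blue measurement minus green
red = 18.7 - green        # green+red measurement minus green

predicted_white = green + blue + red
print(round(blue, 1), round(red, 1), round(predicted_white, 1))  # → 8.6 11.0 27.3
# Measured white was 26.4 W, so the linear model overshoots by ~0.9 W.
```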
nice automations:
https://community.home-assistant.io/t/shelly-plus-i4-wall-switch-example-automation/401625
example to send a command to the Shelly:
service: mqtt.publish
data:
topic: homeassistant/shellyplus1pm-fgfloodlights/rpc
payload: >-
{{ {'id': 1, 'src':'homeassistant/shellyplus1pm-fgfloodlights/status',
'method':'Shelly.GetStatus'} | to_json }}
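For reference, this is the payload the `to_json` filter produces, built without Jinja (topic and `src` names copied from the example above):

```python
import json

topic = "homeassistant/shellyplus1pm-fgfloodlights/rpc"
payload = json.dumps({
    "id": 1,
    "src": "homeassistant/shellyplus1pm-fgfloodlights/status",  # reply topic
    "method": "Shelly.GetStatus",
})
print(payload)
# The device answers on <src>/rpc with the full status object.
```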
https://shelly-api-docs.shelly.cloud/gen2/ComponentsAndServices/Mqtt
<model> = shellyplus1pm
https://shelly-api-docs.shelly.cloud/gen1/#shelly1-1pm-mqtt
Shelly1/1PM: MQTT
Shelly1 and Shelly1PM use the following topics, where <model> is either shelly1 or shelly1pm:
shellies/<model>-<deviceid>/relay/0 to report status: on, off or overpower (the latter only for Shelly1PM)
shellies/<model>-<deviceid>/relay/0/command accepts on, off or toggle and applies accordingly
shellies/<model>-<deviceid>/input/0 reports the state of the SW terminal
shellies/<model>-<deviceid>/longpush/0 reports longpush state as 0 (shortpush) or 1 (longpush)
shellies/<model>-<deviceid>/input_event/0 reports input event and event counter, e.g.: {"event":"S","event_cnt":2} see /status for details
Shelly1PM adds:
shellies/shelly1pm-<deviceid>/relay/0/power reports instantaneous power in Watts
shellies/shelly1pm-<deviceid>/relay/0/energy reports an incrementing energy counter in Watt-minute
shellies/shelly1pm-<deviceid>/temperature reports internal device temperature in °C
shellies/shelly1pm-<deviceid>/temperature_f reports internal device temperature in °F
shellies/shelly1pm-<deviceid>/overtemperature reports 1 when device has overheated, normally 0
shellies/shelly1pm-<deviceid>/temperature_status reports Normal, High, Very High
shellies/shelly1pm-<deviceid>/relay/0/overpower_value reports the value in Watts, on which an overpower condition is detected
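The Gen1 topic scheme above is regular enough to generate programmatically (a sketch with a hypothetical device id):

```python
def gen1_topics(model: str, deviceid: str) -> dict:
    """Build the Gen1 MQTT topics for a Shelly1/1PM relay channel 0."""
    base = f"shellies/{model}-{deviceid}"
    return {
        "state":   f"{base}/relay/0",
        "command": f"{base}/relay/0/command",
        "power":   f"{base}/relay/0/power",    # Shelly1PM only
        "energy":  f"{base}/relay/0/energy",   # Watt-minutes, Shelly1PM only
    }

print(gen1_topics("shelly1pm", "123456")["command"])
# → shellies/shelly1pm-123456/relay/0/command
```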
https://community.home-assistant.io/t/shelly-gen-2-plus-and-pro-using-mqtt/347979/19
mqtt:
switch:
- name: "Garage: Heat pump switch"
state_topic: "garage-heat-pump-switch-pm/status/switch:0"
value_template: "{{ value_json.output }}"
state_on: true
state_off: false
command_topic: "garage-heat-pump-switch-pm/rpc"
payload_on: '{"id":1, "src": "homeassistant", "method": "Switch.Set", "params":{"id":0,"on":true}}'
payload_off: '{"id":1, "src": "homeassistant", "method": "Switch.Set", "params":{"id":0,"on":false}}'
optimistic: false
qos: 1
retain: false
sensor:
- name: "Garage: Heat pump switch temperature"
unique_id: 4ca71dd5-645d-48e5-a387-a655cc7dd42e
state_topic: "garage-heat-pump-switch-pm/status/switch:0"
value_template: "{{ value_json.temperature.tC }}"
unit_of_measurement: "°C"
device_class: temperature
- name: "Garage: Heat pump switch current power"
unique_id: 44f6d6de-be45-4697-8ff9-882fae91c6a2
state_topic: "garage-heat-pump-switch-pm/status/switch:0"
value_template: "{{ value_json.apower }}"
unit_of_measurement: "W"
device_class: power
- name: "Garage: Heat pump switch total power"
unique_id: 44f6d6de-be45-4697-8ff9-882fae91c6a1
state_topic: "garage-heat-pump-switch-pm/status/switch:0"
value_template: "{{ value_json.aenergy.total }}"
unit_of_measurement: "W"
device_class: power
https://community.openhab.org/t/shelly-plus-1pm-via-mqtt/139826
Hi folks,
I am a bit impatient and cannot wait for the Shelly binding to cover the new devices, and I read that thanks to the new ESP32 and new API, Shelly devices can now run cloud and MQTT in parallel. So if you start controlling your new devices (e.g. the Shelly Plus 1PM) via MQTT, you are not missing out on anything else. So here is my Thing code for the Shelly Plus 1PM:
UID: mqtt:topic:5d0f79cab1:b0df524d88
https://sequr.be/blog/2020/10/mqtt-templates-for-shelly-devices/#mqtt-templates
## /sensors/room_x/lamp.yaml
# Input type
- platform: mqtt
name: Room X - lamp - input
expire_after: 86400
qos: 1
state_topic: shellies/shelly1pm-[SHELLY ID]/input_event/0
# Device temperature °C
- platform: mqtt
name: Room X - lamp - temperature
expire_after: 86400
qos: 1
device_class: temperature
unit_of_measurement: '°C'
icon: mdi:temperature-celsius
state_topic: shellies/shelly1pm-[SHELLY ID]/temperature
# Device temperature °F
- platform: mqtt
name: Room X - lamp - temperature F
expire_after: 86400
qos: 1
device_class: temperature
unit_of_measurement: '°F'
icon: mdi:temperature-fahrenheit
state_topic: shellies/shelly1pm-[SHELLY ID]/temperature_f
# Power consumption (live)
- platform: mqtt
name: Room X - lamp - power
expire_after: 86400
qos: 1
device_class: power
unit_of_measurement: 'W'
icon: mdi:lightning-bolt-outline
state_topic: shellies/shelly1pm-[SHELLY ID]/relay/0/power
# Power consumption (since reboot)
- platform: mqtt
name: Room X - lamp - energy
expire_after: 86400
qos: 1
device_class: energy
state_class: total_increasing
unit_of_measurement: 'Wh'
value_template: "{{ value | float / 60 }}"
icon: mdi:lightning-bolt
state_topic: shellies/shelly1pm-[SHELLY ID]/relay/0/energy
# Overpower
- platform: mqtt
name: Room X - lamp - overpower
expire_after: 86400
qos: 1
device_class: power
unit_of_measurement: 'W'
icon: mdi:flash-alert
state_topic: shellies/shelly1pm-[SHELLY ID]/overpower_value
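The `value | float / 60` template in the energy sensor above exists because Gen1 reports energy in Watt-minutes; dividing by 60 yields Wh:

```python
def wmin_to_wh(raw: str) -> float:
    """Convert the Gen1 energy counter (Watt-minutes) to Wh, as the value_template does."""
    return float(raw) / 60

print(wmin_to_wh("1380"))  # → 23.0  (1380 Wmin = 23 Wh)
```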
https://usa.shelly.cloud/knowledge-base/shelly-rgbw2/
https://community.home-assistant.io/t/shelly-rgbw2-automations-mqtt/234564
shellies/shellyrgbw2-E0AC83/white/3/status
shellies/shellyrgbw2-E0AC83/color/0/status
https://www.esphome-devices.com/devices/Shelly-Plus-1PM
https://community.home-assistant.io/t/shelly-firmware-updates/123123/2
in configuration.yaml, add the rest_command integration and configure it like so:
rest_command:
update_shelly:
url: 'http://{{ shelly_ip }}/ota?update=true'
create a new automation, e.g. shellyupdate.yaml, that looks like this:
- alias: "Shelly New Firmware Notification"
id: 'snfn'
trigger:
platform: mqtt
topic: shellies/announce
condition:
condition: template
value_template: "{{ trigger.payload_json['new_fw'] == true }}"
action:
- service: persistent_notification.create
data_template:
title: "New Shelly Firmware Update Released"
message: "Update will be attempted."
notification_id: "{{ trigger.payload_json['id'] }}"
- service: rest_command.update_shelly
data:
shelly_ip: "{{ trigger.payload_json['ip'] }}"
- alias: "Shelly New Firmware Notification Removal"
id: 'snfnr'
trigger:
platform: mqtt
topic: shellies/announce
condition:
condition: template
value_template: "{{ trigger.payload_json['new_fw'] == false }}"
action:
service: persistent_notification.dismiss
data_template:
notification_id: "{{ trigger.payload_json['id'] }}"
http://[IP-OF-SHELLY]/ota?update=1 does the trick
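The `shellies/announce` payload driving those automations is plain JSON; parsing it and building the OTA URL (a sketch; field names `id`, `ip`, `new_fw` as used by the templates above):

```python
import json

# Example announce payload (abridged, hypothetical device id):
announce = json.loads('{"id": "shelly1pm-123456", "ip": "192.168.178.53", "new_fw": true}')

if announce["new_fw"]:
    ota_url = f"http://{announce['ip']}/ota?update=true"
    print(ota_url)  # → http://192.168.178.53/ota?update=true
```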
https://smarthome.university/home-assistant/node-red/
https://funprojects.blog/2020/03/23/home-assistant-with-node-red/
Solarman?
See KeePassXC
https://www.wemos.cc/en/latest/c3/c3_mini.html
https://www.fambach.net/d1-mini-esp8266-modul-2-3/
static const uint8_t LED_BUILTIN = 7;
#define BUILTIN_LED LED_BUILTIN // backward compatibility
static const uint8_t TX = 21;
static const uint8_t RX = 20;
static const uint8_t SDA = 8;
static const uint8_t SCL = 10;
static const uint8_t SS = 5;
static const uint8_t MOSI = 4;
static const uint8_t MISO = 3;
static const uint8_t SCK = 2;
static const uint8_t A0 = 0;
static const uint8_t A1 = 1;
static const uint8_t A2 = 2;
static const uint8_t A3 = 3;
static const uint8_t A4 = 4;
static const uint8_t A5 = 5;
If Wi-Fi does not work → set the Wi-Fi TX power to 8.5 dBm:
WiFi.setTxPower(WIFI_POWER_8_5dBm);
Configure Board
Use the latest esp32 Arduino package
Choose board LOLIN C3 MINI
Upload Code
Put C3 boards into Device Firmware Upgrade (DFU) mode:
Hold button 9
Press the reset button
Release button 9 when you hear the USB reconnection prompt tone
D:\"Home Assistant"\Python\myenv\Scripts\activate.bat
pip install esptool
Put S2 boards into Device Firmware Upgrade (DFU) mode:
Hold button 0
Press the reset button
Release button 0 when you hear the USB reconnection prompt tone
Flash using esptool.py
esptool.py --port PORT_NAME erase_flash
esptool.py --port COM20 erase_flash
esptool.py --port PORT_NAME --baud 1000000 write_flash -z 0x1000 FIRMWARE.bin
https://codeandlife.com/2022/02/25/using-ssd1306-oled-wemos-s2-pico-esp32-s2-board/
https://microcontrollerslab.com/oled-display-raspberry-pi-pico-micropython-tutorial/
We will have to install the SSD1306 OLED library for MicroPython to continue with our project.
To successfully do that, open your Thonny IDE with your Raspberry Pi Pico plugged in your system. Go to Tools > Manage Packages. This will open up the Thonny Package Manager.
Search for “ssd1306” in the search bar by typing its name and clicking the button ‘Search on PyPI.’
From the following search results click on the one highlighted below: micropython-ssd1306
Install this library.
https://github.com/slashback100/presence_simulation
https://blog.jonsdocs.org.uk/2022/08/29/simulating-presence-with-home-assistant/
I created a toggle helper called input_boolean.holidaymode:
Go to Settings then Devices & Services
Click on the Helpers tab (in the web interface this will be at the top; on the Android app it's an icon at the bottom)
Click create helper
Choose Toggle
Type a name (e.g., HolidayMode) and choose an icon (e.g., mdi:beach) and click Create
The list of helpers will now include input_boolean.holidaymode
As the automation stores the light it is going to change (switch on or off) in the "light to switch" variable, we need to create that.
Go to Settings then Devices & Services
Click on the Helpers tab (in the web interface this will be at the top; on the Android app it's an icon at the bottom)
Click create helper
Choose Text
For name, type light_to_switch and leave the other options as their defaults
Click Create
The list of helpers will now include input_text.light_to_switch
Once we've created the light group we'll be referencing it by name in the automation.
Go to Settings then Devices & Services
Click on the Helpers tab (in the web interface this will be at the top; on the Android app it's an icon at the bottom)
Click create helper
Choose Group (a circle with three dots in it)
When asked what type of group this is, choose Light group
Give your group a name (e.g., "Presence simulation lights") and choose the members to include in the group
Click Submit
In the helpers list you'll see a group called light.presence_simulation_lights - copy this name exactly as we'll need it in the automation
Create automation - random lights on
alias: "Holiday mode: Presence simulation"
trigger:
- platform: time_pattern
minutes: /30
condition:
- condition: state
entity_id: input_boolean.holidaymode
state: "on"
- condition: sun
after: sunset
after_offset: "-00:30:00"
- condition: time
before: "22:00:00"
action:
- delay: 00:{{ '{:02}'.format(range(0,30) | random | int) }}:00
- service: input_text.set_value
data_template:
entity_id: input_text.light_to_switch
value: "{{ state_attr('light.presence_simulation_lights','entity_id') | random }}"
- service: homeassistant.toggle
data_template:
entity_id: "{{states('input_text.light_to_switch')}}"
initial_state: true
hide_entity: false
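The `delay` line in the action zero-pads a random minute count into an HH:MM:SS string; the same logic in plain Python:

```python
import random

minute = random.randrange(0, 30)          # range(0,30) | random in Jinja
delay = "00:{:02}:00".format(minute)      # zero-padded, e.g. 00:07:00

print(delay)
```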
Create automation - lights off at random bedtime
alias: "Holiday mode: Turning off all toggled lights"
description: ""
trigger:
- platform: time
at: "23:00:00"
condition:
- condition: state
entity_id: input_boolean.holidaymode
state: "on"
action:
- delay: 00:{{ range(15,59) | random | int }}:00
- service: homeassistant.turn_off
data: {}
target:
entity_id: light.presence_simulation_lights
initial_state: true
hide_entity: false
mode: single
https://blog.mornati.net/home-assistant-simple-presence-simulation-script
Add the following script in your scripts configuration file (ie scripts.yaml)
light_duration:
mode: parallel
description: "Turns on a light for a while, and then turns it off"
fields:
light:
description: "A specific light"
example: "light.bedroom"
duration:
description: "How long the light should be on in minutes"
example: "25"
sequence:
- service: homeassistant.turn_on
data:
entity_id: "{{ light }}"
- delay: "{{ duration }}"
- service: homeassistant.turn_off
data:
entity_id: "{{ light }}"
The automation will then start the script providing the correct parameters.
- id: random_away_lights
alias: "Random Away Lights"
mode: parallel
trigger:
- platform: time_pattern
minutes: "/30"
condition:
- condition: state
entity_id: input_boolean.away
state: "on"
- condition: sun
after: sunset
after_offset: "-00:30:00"
- condition: time
before: "23:59:00"
action:
service: script.light_duration
data:
light: "{{states.group.simulation_lights.attributes.entity_id | random}}"
duration: "00:{{ '{:02}'.format(range(5,30) | random | int) }}:00"
I created a group with a list of lights I want to use to simulate the presence. I put only the lights within the rooms visible from the outside.
simulation_lights:
name: Lights Presence Simulation
entities:
- light.salle_manger
- light.cuisine_table
- light.bureau_marco
- light.salon_corner
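What the automation does each half hour amounts to picking a random group member and a random duration (a sketch using the entity names above):

```python
import random

simulation_lights = [
    "light.salle_manger",
    "light.cuisine_table",
    "light.bureau_marco",
    "light.salon_corner",
]

light = random.choice(simulation_lights)                  # {{ ... | random }}
duration = "00:{:02}:00".format(random.randrange(5, 30))  # 5-29 minutes

print(light, duration)
```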
https://github.com/nielsfaber/scheduler-component
https://github.com/nielsfaber/scheduler-card
https://www.home-assistant.io/integrations/recorder
recorder:
purge_keep_days: 5
db_url: sqlite:////home/user/.homeassistant/test
https://community.home-assistant.io/t/simple-way-to-reduce-your-db-size/234787
open up a terminal (SSH or via web; I use this extension for this purpose: https://github.com/hassio-addons/addon-ssh)
change directory to config cd ~/config
open sqlite shell sqlite3 home-assistant_v2.db
enter the following commands in the shell:
.header on
.mode column
.width 50 10
SELECT entity_id, COUNT(*) as count FROM states GROUP BY entity_id ORDER BY count DESC LIMIT 20;
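The same "top 20 noisiest entities" query can be run from Python; here against a tiny in-memory stand-in for the `states` table (the real file is `home-assistant_v2.db`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE states (entity_id TEXT)")
con.executemany("INSERT INTO states VALUES (?)",
                [("sensor.power",)] * 3 + [("light.kitchen",)] * 1)

rows = con.execute(
    "SELECT entity_id, COUNT(*) AS count FROM states "
    "GROUP BY entity_id ORDER BY count DESC LIMIT 20"
).fetchall()
print(rows)  # → [('sensor.power', 3), ('light.kitchen', 1)]
```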
https://community.home-assistant.io/t/how-to-keep-your-recorder-database-size-under-control/295795
homeassistant:
allowlist_external_dirs:
- /config
sensor:
- platform: filesize
file_paths:
- /config/home-assistant_v2.db

SCENES = {
1: "Ocean",
2: "Romance",
3: "Sunset",
4: "Party",
5: "Fireplace",
6: "Cozy",
7: "Forest",
8: "Pastel Colors",
9: "Wake up",
10: "Bedtime",
11: "Warm White",
12: "Daylight",
13: "Cool white",
14: "Night light",
15: "Focus",
16: "Relax",
17: "True colors",
18: "TV time",
19: "Plantgrowth",
20: "Spring",
21: "Summer",
22: "Fall",
23: "Deepdive",
24: "Jungle",
25: "Mojito",
26: "Club",
27: "Christmas",
28: "Halloween",
29: "Candlelight",
30: "Golden white",
31: "Pulse",
32: "Steampunk",
1000: "Rhythm",
}
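Reverse lookup (scene name → id) against that table is a one-liner; dict abridged here for illustration:

```python
SCENES = {1: "Ocean", 27: "Christmas", 1000: "Rhythm"}  # abridged from the table above

def scene_id(name: str) -> int:
    """Reverse lookup: scene name → numeric id."""
    return next(i for i, n in SCENES.items() if n == name)

print(scene_id("Christmas"))  # → 27
```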
https://iotprojectsideas.com/portable-esp32-wifi-repeater/
download repository and flash tool:
https://github.com/martin-ger/esp32_nat_router
https://www.espressif.com/en/support/download/other-tools
run the tool and select the three files in the folder D:\Home Assistant\ESP32_wifi_repeater\esp32_nat_router-master\build\esp32: bootloader.bin, firmware.bin, partitions.bin
Now, we also need to specify the hex code indicating where the files are. For the bootloader type 0x1000, for esp32 nat router file type 0x10000, and for partition file type 0x8000.
Press and hold the boot button on your ESP32 board and click on the start button to start flashing firmware.

After the first boot, it provides an open WiFi SSID "ESP32_NAT_Router". Connect to this WiFi network and perform basic configuration via a simple web interface.
The web interface allows for the configuration of all the parameters required for basic forwarding functionality. Open your web browser and enter the following address: "http://192.168.4.1". Now you should see the following page.
Firstly, in the "STA Settings" enter the correct WiFi credentials of your main WiFi network that you want to extend. Leave the password field empty for open networks. Click on "Connect". The ESP32 reboots and will connect to your WiFi router. You should see the status LED ON after a few seconds.
You can now reload the page and change the "AP Settings". Enter the new SSID and password and click "Set"; the ESP reboots again. Now it is ready for forwarding traffic over the newly configured Access Point.
SSID: ESP32_NAT_Router
pass: mysupersecurepassword
192.168.178.52
pi
raspberry
https://raspberrytips.com/docker-on-raspberry-pi/
sudo apt update
sudo apt upgrade -y
sudo reboot
curl -sSL https://get.docker.com | sh
Allow Docker to be used without being root → add the current user to the docker group: sudo usermod -aG docker $USER → sudo usermod -aG docker pi
Exit your SSH session, or restart the Raspberry Pi, and you should then be able to run any docker command without sudo. → docker ps → If it works, you are ready to move forward.
Test your Docker setup: docker run hello-world
Monitor the running containers:
docker ps
Display the current version of Docker:
docker version
Download a new image:
docker pull [IMAGE]
Run an image (and download it if not existing on your local system):
docker run [IMAGE]
Search for an image in the Docker repository:
docker search [X]
Show the usage statistics:
docker stats
Display the list of all the Docker commands:
docker help
https://www.home-assistant.io/installation/raspberrypi#install-home-assistant-container
docker run -d \
--name homeassistant \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Berlin \
-v /PATH_TO_YOUR_CONFIG:/config \
--network=host \
ghcr.io/home-assistant/home-assistant:stable
Once the Home Assistant container is running, Home Assistant should be accessible at http://<host>:8123
http://192.168.178.52:8123
RESTART HOME ASSISTANT
docker restart homeassistant
https://community.home-assistant.io/t/configurator-file-editor-for-ha-core-in-docker/238472/4
cd ~/docker
mkdir configurator
cd configurator
sudo nano docker-compose.yaml
version: "3.5"
services:
configurator:
container_name: configurator
image: causticlab/hass-configurator-docker:latest
restart: always
network_mode: host
labels:
- "com.centurylinklabs.watchtower.enable=true" # for Watchtower automatic updates
ports:
- "3218:3218/tcp"
volumes:
- ${HASSIODIR}/:/config # map this volume to your hassio config directory
environment:
- HC_BASEPATH=/config
- HC_HASS_API_PASSWORD=${CONFIGURATORPSWD} #Create a Long-Lived Access Token
- HC_IGNORE_SSL=True
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
docker-compose up -d
http://192.168.178.52:3218
https://community.home-assistant.io/t/configure-ssl-with-docker/196878
edit configuration.yaml:
http:
base_url: https://myhomeassistant.com:8123
ssl_certificate: /config/fullchain.pem
ssl_key: /config/privkey.pem
create certificates:
cd /PATH_TO_YOUR_CONFIG
sudo openssl req -sha256 -addext "subjectAltName = IP:192.168.178.52" -newkey rsa:4096 -nodes -keyout privkey.pem -x509 -days 730 -out fullchain.pem
Home Assistant in Docker with Nginx and Let's Encrypt on Raspberry Pi
cd ~/docker
mkdir proxy
cd proxy
sudo nano docker-compose.yaml
version: '3'
services:
nginx:
image: arm64v8/nginx
ports:
- "80:80"
volumes:
- ./data/nginx:/etc/nginx/conf.d:ro
- ./data/wwwroot:/var/www/root:ro
mkdir data
cd data
mkdir nginx
cd nginx
sudo nano app.conf
server {
listen 80;
server_name habora.duckdns.org; #replace this
location / {
root /var/www/root;
}
}
cd ..
mkdir wwwroot
cd wwwroot
sudo nano index.html
<html>
<body>
<h1>Welcome</h1>
It works!
</body>
</html>
cd ~/docker/proxy
docker-compose up -d
http://192.168.178.52:80/
cd ~/docker/proxy
sudo nano docker-compose.yaml
version: '3'
services:
nginx:
image: arm64v8/nginx
ports:
- "80:80"
- "443:443" # added
volumes:
- ./data/nginx:/etc/nginx/conf.d:ro
- ./data/wwwroot:/var/www/root:ro
- ./data/certbot/conf:/etc/letsencrypt:ro # added
- ./data/certbot/www:/var/www/certbot:ro # added
certbot: # added
image: certbot/certbot:arm64v8-latest # added
volumes: # added
- ./data/certbot/conf:/etc/letsencrypt # added
- ./data/certbot/www:/var/www/certbot # added
cd ~/docker/proxy/data/nginx
sudo nano app.conf
server {
listen 80;
server_name habora.duckdns.org; # replace this
location /.well-known/acme-challenge/ { # added
root /var/www/certbot; # added
} # added
location / {
root /var/www/root;
}
}
server {
listen 443 ssl;
server_name habora.duckdns.org;
location / {
root /var/www/root;
}
ssl_certificate /etc/letsencrypt/live/habora.duckdns.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/habora.duckdns.org/privkey.pem;
#Optional: Only works with Philipp's script (see below)
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
cd ~/docker/proxy
curl -L https://raw.githubusercontent.com/wmnnd/nginx-certbot/master/init-letsencrypt.sh > init-letsencrypt.sh
sudo nano init-letsencrypt.sh
Edit the script to add in your domain(s) and your email address. If you’ve changed the directories of the shared Docker volumes, make sure you also adjust the data_path variable as well.
Email: boraers@googlemail.com, https://habora.duckdns.org
chmod +x init-letsencrypt.sh
sudo ./init-letsencrypt.sh
docker-compose up -d
http://habora.duckdns.org:80/
rm docker-compose.yaml
sudo nano docker-compose.yaml
version: '3'
services:
nginx:
image: arm64v8/nginx
ports:
- "80:80"
- "443:443"
volumes:
- ./data/nginx:/etc/nginx/conf.d:ro
- ./data/wwwroot:/var/www/root:ro
- ./data/certbot/conf:/etc/letsencrypt:ro
- ./data/certbot/www:/var/www/certbot:ro
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
certbot:
image: certbot/certbot:arm64v8-latest
volumes:
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
http://192.168.178.52:9000/
https://nginxproxymanager.com/guide/#quick-setup
cd ~/docker/proxy
sudo rm * -R
sudo nano docker-compose.yml
version: '3'
services:
app:
image: 'jc21/nginx-proxy-manager:latest'
restart: unless-stopped
ports:
- '80:80'
- '81:81'
- '443:443'
volumes:
- ./data:/data
- ./letsencrypt:/etc/letsencrypt
docker-compose up -d
https://theprivatesmarthome.com/how-to/set-up-nginx-proxy-manager-in-home-assistant/
open the admin page:
http://127.0.0.1:81
http://192.168.178.52:81
Email: admin@example.com
Password: changeme
add proxy host:
domain names: habora.duckdns.org
scheme: http
forward hostname / ip: 192.168.178.52
forward port: 8123
cache assets: false
block common exploits: true
websockets support: true
access list: publicly accessible
SSL
“request a new ssl certificate”
force SSL: true
edit configuration.yaml
http:
use_x_forwarded_for: true
trusted_proxies:
- 172.16.0.0/12
these options work now ==>
http://192.168.178.52:8123/lovelace/0
https://habora.duckdns.org/lovelace/0
https://www.addictedtotech.net/nginx-proxy-manager-tutorial-duckdns-configuration-episode-7/
STEP 1: SET UP A DUCKDNS ACCOUNT.
The First thing to do will be to set up a DuckDNS account which is easy.
Just navigate to their homepage and log in using one of the many sign in options they offer. In our example we use Google.

https://www.duckdns.org
STEP 2: ENTER A DUCKDNS SUBDOMAIN.
Once logged in we are going to create a subdomain by entering into the white box a name you would like to use for your service.
Note: You will need to create a new subdomain for each docker container service you host.
In our example, we just put in “a2t“. Then click on the green “add domain” button.

This now gives us a domain name to use. In our case it is a2t.duckdns.org.
The DuckDNS service will automatically take the public IP address you are currently on and add this to the IP field. If you are using a VPN, proxy or are using any other network that is different from the one you want to host your service on you will need to update this IP manually to start with to ensure the correct IP address is used. (This will be auto-updated later by our DuckDNS container either way).
STEP 3: CREATE AND DEPLOY THE DUCKDNS CONTAINER USING A STACK.
Now we have our subdomain we are going to “log in” to our “Portainer” dashboard on our Raspberry Pi and navigate to the “Stacks” page:
http://192.168.2.5:9000/#!/1/docker/stacks

From there we are going click on the “Add stack” button.
This will open up a new Stack creation window. We will then name our stack “duckdns“

Then in the Web editor we will paste the following Docker compose data into the empty field.
DOCKER COMPOSE STACK:
---
version: "2.1"
services:
duckdns:
image: ghcr.io/linuxserver/duckdns
container_name: duckdns
environment:
- PUID=1000 #optional
- PGID=1000 #optional
- TZ=Europe/London
- SUBDOMAINS=subdomain1,subdomain2
- TOKEN=token
- LOG_FILE=false #optional
volumes:
- /path/to/appdata/config:/config #optional
restart: unless-stopped
---
version: "2.1"
services:
duckdns:
image: ghcr.io/linuxserver/duckdns
container_name: duckdns
environment:
#- PUID=1000 #optional
#- PGID=1000 #optional
- TZ=Europe/Berlin
- SUBDOMAINS=habora #subdomain1,subdomain2
- TOKEN=799093a4-0b34-454f-99cb-25a4637bf404
- LOG_FILE=false #optional
volumes:
- /path/to/appdata/config:/config #optional
restart: unless-stopped
You will then need to change the fields to match your installation. If you would like to use a specific user account, you will need to find the PUID and PGID of that user account. We have shown how to do this in our Youtube video, so please watch that. If you would like to go with the defaults, just remove both these fields as they are optional.
Set your timezone “TZ” to your current location.
Add your subdomain name to the "SUBDOMAINS" field. If you have more than one, add an entry for each subdomain you wish to use and separate them with a comma.
Note you do not need to add the full domain name, only the subdomain part. In our example, we would only put "a2t" into the "SUBDOMAINS" field, not a2t.duckdns.org.
Add your Token to the TOKEN field, which can be found on the Duckdns subdomain creation page at the top right. This is unique to every user and only needs to be put in once regardless of how many subdomains you use.

If you would like to use logs then you can change the field to “true” this is optional.
Under Volumes add the location of where you install all your Docker data.
Now that you have set those fields, your Docker compose Stack should look something like this:

Now you have confirmed all is set up correctly you can press the “Deploy the stack” button.

You can now check the Portainer containers page to confirm the “duckdns” container has been created correctly.

Press the Logs button to check all is as expected. It should look like this:

To confirm your domain is working correctly you can open a browser window and enter your domain name into the address field.
http://a2t.duckdns.org
You should now see this:

https://www.schaerens.ch/raspi-setting-up-mosquitto-mqtt-broker-on-raspberry-pi-docker/
Install Docker on Raspberry Pi
curl -sSL https://get.docker.com | sh
Add user pi to group docker:
sudo usermod -aG docker pi
Install Docker Compose (first install Python and Pip)
sudo apt-get install libffi-dev libssl-dev python3-dev python3 python3-pip -y
sudo pip3 install docker-compose
sudo reboot
Create the following directory tree
sudo mkdir /docker
sudo mkdir /docker/mosquitto
sudo mkdir /docker/mosquitto/config
Create the config file for Mosquitto with the following content:
sudo nano /docker/mosquitto/config/mosquitto.conf
# Config file for mosquitto
listener 1883
#protocol websockets
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
allow_anonymous false
Create the config file for docker-compose with the following content (pay attention to the indentation of the lines in the YAML file, use 4 spaces per indentation, no tabs):
cd /docker
sudo nano docker-compose.yaml
version: '3'
services:
mosquitto:
container_name: mosquitto
restart: always
image: eclipse-mosquitto
ports:
- "1883:1883"
- "9001:9001"
volumes:
- ./mosquitto/config/mosquitto.conf:/mosquitto/config/mosquitto.conf
- ./mosquitto/data:/mosquitto/data
- ./mosquitto/log:/mosquitto/log
networks:
- default
networks:
default:
version: "3"
services:
mosquitto:
image: eclipse-mosquitto
network_mode: host
volumes:
- ./conf:/mosquitto/conf
- ./data:/mosquitto/data
- ./log:/mosquitto/log
sudo nano /docker/mosquitto/config/mosquitto.conf
## Config file for mosquitto
listener 1883
#protocol websockets
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
#allow_anonymous false
password_file /mosquitto/config/mosquitto.passwd
docker-compose exec mosquitto mosquitto_passwd -c /mosquitto/config/mosquitto.passwd mosquitto
sudo apt install docker-compose
pip3 install --upgrade requests
docker run -d -p 8080:80 --name webserver nginx
docker rm mosquitto
sudo docker run -d -it --name mosquitto -p 127.0.0.1:1883:1883 eclipse-mosquitto
sudo docker run -d -it --name mosquitto -p 8001:8001 myserver_new
https://community.openhab.org/t/mosquitto-error-address-already-in-use/121506
Now you can install and start Mosquitto:
docker-compose up -d
Check if Mosquitto is running:
docker ps
https://hub.docker.com/_/eclipse-mosquitto/
docker pull eclipse-mosquitto
https://medium.com/himinds/mqtt-broker-with-secure-tls-and-docker-compose-708a6f483c92
https://www.diyhobi.com/install-mqtt-and-openhab-3-in-docker-raspberry-pi-4/
curl -sSL https://get.docker.com | sh
sudo usermod -aG docker pi
sudo apt-get install libffi-dev libssl-dev python3-dev python3 python3-pip -y
sudo pip3 install docker-compose
sudo reboot
cd
mkdir docker
cd docker
mkdir smarthome
mkdir smarthome/mqtt
mkdir smarthome/mqtt/config
sudo nano smarthome/mqtt/config/mosquitto.conf
# Config file for mosquitto
listener 1883
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
allow_anonymous false
cd smarthome
nano docker-compose.yaml
version: '3.5'
services:
#mqtt
mosquitto:
container_name: mqtt
#hostname: mosquitto
restart: always
image: eclipse-mosquitto
ports:
- "8883:8883"
- "9001:9001"
volumes:
- ./mqtt/config/mosquitto.conf:/mosquitto/config/mosquitto.conf
- ./mqtt/data:/mosquitto/data
- ./mqtt/log:/mosquitto/log
networks:
- default
networks:
default:
docker-compose up -d
docker ps
cd ~/docker/smarthome
sudo rm * -R
cd mqtt
cd config
ls
sudo rm mosquitto.conf -R
sudo nano mosquitto.conf
# Config file for mosquitto
listener 1883
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
allow_anonymous false
cd ~/docker/smarthome
docker-compose up -d
docker exec -it mqtt sh
mosquitto_passwd -c /mosquitto/data/pwfile mymqtt
Username: mymqtt, Password: mypassword
exit
sudo nano ~/docker/smarthome/mqtt/config/mosquitto.conf
Paste this at the bottom: password_file /mosquitto/data/pwfile
docker start mqtt
docker ps
192.168.178.52:1883
https://darkwolfcave.de/raspberry-pi-docker-ohne-probleme-installieren/
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
docker ps
https://IP-OF-YOUR-RASPBERRY:9443 → https://192.168.178.52:9443
user: admin, pass: password1234
sudo mkdir /opt/influxdb
sudo mkdir /opt/grafana
sudo chmod 775 /opt/influxdb/ /opt/grafana/
influxdb:1.8
8086
/var/lib/influxdb
grafana/grafana
3000
/var/lib/grafana (which unfortunately is only stated directly on the Grafana page, not here)
Start your Portainer environment (https://IP-OF-YOUR-RASPBERRY:9443), best in a new tab, and log in.
In the menu on the left, choose “Containers”, then click the “+ Add container” button:

Don't be overwhelmed by the settings. We don't need all of them, and the rest will become clearer bit by bit.

First we give our container a name (1): influxDB
Then we look up the matching image (2) on “DockerHub”: influxdb:1.8
Now click the “publish a new network port” button (3) and enter port 8086 for both Host (4) and Container (5)

Under “Command & logging” you should select “Interactive & TTY” for the console.

A bit further down, click “Volumes (1)”, then “map additional volume (2)” and the “Bind (3)” button.
Remember the path we saw in the description on the hub.docker page for influxDB?
That is what we now enter under “container (4)”: /var/lib/influxdb.
Under “Host (5)” goes its counterpart, namely the folder we created for the persistent data on our Raspberry: /opt/influxdb
In the “Restart policy” tab we specify how our container should behave if the Raspberry reboots or the container itself exits with an error. We pick “Always” – it should always restart on its own.

Now we have entered everything we need and can click the “Deploy the container” button, a bit further up again.
This takes a little while, since the image first has to be downloaded and unpacked and the container created. The next deploy of the container would be much faster:

With that you have successfully started influxDB as a container. You can verify it, of course: in the Containers menu you will see a new influxDB container with status running in the overview. Feel free to take a look at the logs, too.
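The clicks above can equally be written down as a compose service in the style of the docker-compose.yaml used for mosquitto earlier. A sketch only, mirroring the walkthrough's values (image influxdb:1.8, host port 8086, bind mount /opt/influxdb, restart always); it would go under the existing services: key:

```yaml
  # hypothetical compose equivalent of the Portainer-created influxDB container
  influxdb:
    container_name: influxDB
    image: influxdb:1.8
    restart: always
    tty: true            # "Interactive & TTY" in Portainer
    stdin_open: true
    ports:
      - "8086:8086"
    volumes:
      - /opt/influxdb:/var/lib/influxdb
```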
user:bora, pass:password1234, orga:home, bucket:influxdb_rapi4
token_01: GK_kb2fTPaEknWQ7c9c5VRU5c5GeRXv8is3_e0qhn9qXLOdbxHkdAfqYZNrfn1jexfQ-RVKYtX7Co9HvKgIJqg==
https://diyi0t.com/visualize-mqtt-data-with-influxdb-and-grafana/

https://thenewstack.io/python-mqtt-tutorial-store-iot-metrics-with-influxdb/

https://darkwolfcave.de/raspberry-pi-monitoring-grafana-installieren/

(1) – Name of the container: “Grafana”
(2) – Name of the image: “grafana/grafana”
(3) – Click to be able to enter/bind new ports
(4) – Port on the host (Raspberry): 3000
(5) – Port of the container: 3000
(6) – Click the “Volumes” button
(7) – Click the button next to Volume mapping so we can enter the paths
(8) – Click the “Bind” button
(9) – Path in the container: /var/lib/grafana
(10) – Matching path on the host (Raspberry): /opt/grafana
In the “Restart policy” tab we again specify how our container should behave if the Raspberry reboots or the container itself exits with an error. We pick “Always” – it should always restart on its own.
Under “Command & logging” you should select “Interactive & TTY” for the console.
(11) – Click the “Deploy the container” button
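As with influxDB, the Grafana clicks can be sketched as a compose service under the same services: key (values mirror the list above; a sketch, not the author's setup):

```yaml
  # hypothetical compose equivalent of the Portainer-created Grafana container
  grafana:
    container_name: Grafana
    image: grafana/grafana
    restart: always
    tty: true
    stdin_open: true
    ports:
      - "3000:3000"
    volumes:
      - /opt/grafana:/var/lib/grafana
```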
http://IP-OF-YOUR-RASPBERRY:3000 → http://192.168.178.52:3000
You can log in with the default user and password (admin/admin) and change the password right away.
Now we log back into the Grafana web GUI on the Raspberry (IP-OF-YOUR-RASPBERRY:3000).
Here we choose the gear icon on the left and “Data sources”, then click the “Add data source” button, and finally say we want to add an InfluxDB:



Now we just have to provide a few details about the database. To keep things simple I completely skip usernames/passwords here, which means the database is freely accessible. Since all of this only runs inside our own network, that's no problem. But keep in mind that under other circumstances you should always set users and passwords.
Back to our settings.
Under Name you can set a name that will later be selectable as a source in your dashboard.
In the HTTP / URL field you enter the IP of the Raspberry the database runs on, followed by port 8086
The remaining settings can stay as they are.

A bit further down you still have to enter the name of the database the data should be read from. Remember? We enter this in telegraf.conf (that step comes further down in these notes). In my case “raspberry_live”.
We check the connection via the “Save & test” button. A green check mark and “Data source is working” tells us everything worked:
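As an alternative to clicking the data source together in the GUI, Grafana can also provision it from a YAML file placed in /etc/grafana/provisioning/datasources/ inside the container. A sketch, assuming the names from these notes (the top-level `database` field is the classic way to name an InfluxQL database; newer Grafana versions prefer jsonData):

```yaml
# hypothetical provisioning file, e.g. influxdb.yaml
apiVersion: 1
datasources:
  - name: Raspberry Pi Monitoring
    type: influxdb
    access: proxy
    url: http://192.168.178.52:8086
    database: raspberry_live
```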

What would Raspberry Pi monitoring be without a dashboard?
To see one in Grafana we would either have to build one ourselves or – what I prefer here – import a ready-made dashboard.
You can browse all available dashboards on the Grafana Labs site and search accordingly. We want one built specifically for a Raspberry Pi, so we search for “Raspberry”.
I think your search will lead you straight to this one: Raspberry Pi Monitoring

That's the one we'll take. How?! Simple: copy the ID at the top right, “10578”.
Then open your Grafana environment (IP-OF-YOUR-RASPBERRY:3000) and go via the “Dashboards” menu to “+ Import”:

Enter the ID of the dashboard you just copied or memorized (10578) and hit the “Load” button.

Now you could change the name of the dashboard, and you definitely have to select the connection to the data – i.e. to the influxDB.
We just set that up, and you should be able to pick it from the drop-down field. In my case “Raspberry Pi Monitoring”

Once everything is set, we click the “Import” button and a few seconds later we already see our dashboard with data.
From now on you can just click through all the panels at your leisure and explore the dashboard. The longer your Raspberry runs, the more data appears. At the top right you can set the refresh rate; default is 1 minute.
Congratulations, your Raspberry Pi monitoring is now complete and working. Have fun with it!

https://www.youtube.com/watch?v=gEIgg5zHuIU
the influxdb data explorer: select a query and copy it from the script editor
https://darkwolfcave.de/raspberry-pi-monitoring-grafana-installieren/
Welcome back! Well?! Head cooled down a bit and ready to take in more? 🙂
Then let's continue right away and finish the last few things.
What comes next is installed directly on the Raspberry again, i.e. not as a container.
As always, connect to your pi via SSH.
So that we can access the influxDB repository to download packages, we fetch a key and store it on the Raspberry:
wget -q https://repos.influxdata.com/influxdata-archive_compat.key
cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
sudo rm -f /etc/apt/trusted.gpg.d/influxdb.gpg
Then we update our sources and install apt-transport-https, which we need for the next step.
sudo apt-get update && sudo apt-get install apt-transport-https
To receive updates later on, make sure the entry in the package sources points at the key that was actually installed above (influxdata-archive_compat.gpg, not the removed influxdb.gpg):
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
Finally we install Telegraf:
sudo apt-get update && sudo apt-get install telegraf

The telegraf package should now be downloaded from repos.influxdata.com and installed.
For more information and further configuration, have a look at the Telegraf installation guide.
We are still missing a few configuration steps so that our Raspberry Pi monitoring gets all the required information and can store it in our database.
For that we open a telegraf config file:
sudo nano /etc/telegraf/telegraf.conf
Have fun scrolling… yes, this file really does feel endless. But don't worry, we won't change much in it, and for our purposes we don't need to.
Search for “OUTPUT PLUGINS” and then for [[outputs.influxdb]]. Under this section we now specify where our influxdb database lives.
At the last commented-out (#) urls entry we simply remove the hash (#) and can essentially leave it as is. The 127 IP is localhost; since influxdb runs directly on the Raspberry, it can be reached that way.
If you have a second Raspberry, you would instead enter the IP of the host the influxdb database runs on.
A little below that entry we also remove the hash (#) in front of “database = ” and give our database a name.
It will be created automatically later, so we don't have to take care of that ourselves.
I named mine “raspberry_live” so that I know later which of my raspberrys the data comes from. For another pi I would use something like “raspberry_test”.
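After uncommenting, the relevant lines of the [[outputs.influxdb]] section should look roughly like this (database name as chosen above; a sketch, not a full section):

```toml
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]   # the influxdb container publishes 8086 on this host
  database = "raspberry_live"        # created automatically on first write
```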

I'm jumping ahead a step here: for my Raspberry Pi monitoring I use a ready-made Grafana dashboard, and its author lists a few parameters that should go into telegraf.conf. So scroll down quite far until you see “INPUT PLUGINS”. Right below that, insert the following:
# In order to monitor both network interfaces, eth0 and wlan0, uncomment or add the following:
[[inputs.net]]
[[inputs.netstat]]
[[inputs.file]]
  files = ["/sys/class/thermal/thermal_zone0/temp"]
  name_override = "cpu_temperature"
  data_format = "value"
  data_type = "integer"
[[inputs.exec]]
  commands = ["/opt/vc/bin/vcgencmd measure_temp"]
  name_override = "gpu_temperature"
  data_format = "grok"
  grok_patterns = ["%{NUMBER:value:float}"]
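The [[inputs.file]] entry above stores the raw value of /sys/class/thermal/thermal_zone0/temp, which is in millidegrees Celsius (the dashboard presumably divides by 1000 for display). A quick check of that conversion with a hardcoded sample reading:

```shell
# /sys/class/thermal/thermal_zone0/temp reports millidegrees Celsius.
raw=55991   # sample value; on the pi: raw=$(cat /sys/class/thermal/thermal_zone0/temp)
awk "BEGIN { printf \"%.1f\n\", $raw / 1000 }"   # prints 56.0
```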

Now save the file and exit; we're done with it for the moment.
So that telegraf is allowed to read the GPU values etc. on the Raspberry, we still have to add the telegraf user to a group (-a keeps its existing groups):
sudo usermod -aG video telegraf
Our changes have no effect yet, so we restart the telegraf service:
sudo service telegraf restart
From now on, data should be collected and written to the database.
What's still missing is the dashboard, so that we actually see something, and the data source configuration, so that Grafana knows where the data should come from!
https://docs.influxdata.com/influxdb/v2.6/write-data/no-code/use-telegraf/manual-config/
create a database/bucket in influxdb
then edit telegraf.conf as described in the section above, but use bucket, token, etc.
this needs an [[outputs.influxdb_v2]] section instead
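Putting the notes above together, the v2 output section would look roughly like this (org and bucket as created earlier; the token is the one saved above, pulled in via an environment variable rather than pasted in – a sketch, not a tested config):

```toml
# sketch of the v2 output in telegraf.conf; replaces [[outputs.influxdb]]
[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8086"]
  token = "${INFLUX_TOKEN}"     # e.g. export token_01 from above as INFLUX_TOKEN
  organization = "home"
  bucket = "influxdb_rapi4"
```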