21 Commits

Author SHA1 Message Date
Valentin Lab e8d6258d4f fix: [0km] display resident memory in ``vps-stats`` 3 months ago
Valentin Lab b341fe0934 fix: correct documentation about ``vps odoo restore`` 3 months ago
Boris Gallet c6ec794d4b new: [myc-update] add hard limit to system journal usage to 200M 9 months ago
Valentin Lab f46f0a9415 fix: dev: [vps] small typo in error message of ``vps backup`` 6 months ago
Valentin Lab 4f3300a93b fix: [0km] repair invalid check for ``compose --debug up`` requirement 8 months ago
Valentin Lab b87b36303d new: [vps,0km] add ``disk`` resource to ``vps stats`` and ``0km vps-stats`` 8 months ago
Valentin Lab 9f0b9908c8 new: [0km] add ``vps-subscribe {add,rm} CHANNEL TOPICS`` actions 9 months ago
Valentin Lab 1a99fdd47b fix: [0km] improve backup check detection 9 months ago
Valentin Lab ea028885ad chg: update link address for deployment 9 months ago
Valentin Lab 4fcbb0007b new: [0km] add support of ``--ignore-{domain,ping}-check`` options to pass to ``vps`` command 9 months ago
Valentin Lab 715e5984a3 chg: [vps] improve ``no-matching-entries`` bug detection 9 months ago
Valentin Lab 83c41f54f2 fix: [vps] add more code to drop indexes and mitigate some migration issues 10 months ago
Valentin Lab 89b977c9e0 new: doc: update the way to clear ``compose`` caches 10 months ago
Valentin Lab 6bea532295 fix: dev: [vps] correct typo !cosmetic 10 months ago
Valentin Lab 1f42339d22 new: [vps] make ``vps backup`` compatible with new ``cron`` containers 10 months ago
Valentin Lab 503656f461 new: [myc-update] check remaining space before starting 10 months ago
Valentin Lab 9d8843e94e new: [vps] make ``restore`` check-fix odoo container before restore 11 months ago
Valentin Lab bf0a86fb02 new: [myc-update] allow full directory ``cron.*`` coverage of cron script deployment 11 months ago
Valentin Lab df944d18e2 new: [myc-update] add larger number for ``fs.inotify.max_user_watches`` 11 months ago
Valentin Lab 0355cfd9b7 new: [myc-update] add sysctl scripts installation facility 11 months ago
Valentin Lab c24430d66d new: [myc-update] set default color mode for ``ls`` command 11 months ago
Changed files:
1. README.org (110)
2. bin/0km (297)
3. bin/myc-update (62)
4. bin/vps (107)
5. etc/sysctl.d/90-inotify_watches (1)

README.org (110)

@@ -67,7 +67,7 @@ From the VPS, as root:
#+BEGIN_SRC sh
export WITHOUT_DOCKER_CLEAN=1 ## only if you want to remove docker clean from cron
wget https://justodooit.myceliandre.fr/r/deploy -qO - | bash
wget https://git.myceliandre.fr/Myceliandre/myc-manage/raw/branch/master/bin/myc-install -qO - | bash
#+END_SRC
If you want to set the domain name:
@@ -75,7 +75,7 @@ If you want to set the domain name:
#+BEGIN_SRC sh
export WITHOUT_DOCKER_CLEAN=1 ## only if you want to remove docker clean from cron
export DOMAIN=myhost.com
wget https://justodooit.myceliandre.fr/r/deploy -qO - | bash
wget https://git.myceliandre.fr/Myceliandre/myc-manage/raw/branch/master/bin/myc-install -qO - | bash
#+END_SRC
***** Deploying the solution for mailcow
@@ -84,7 +84,7 @@ wget https://justodooit.myceliandre.fr/r/deploy -qO - | bash
#+BEGIN_SRC sh
export WITHOUT_DOCKER_CLEAN=1 ## only if you want to remove docker clean from cron
wget https://justodooit.myceliandre.fr/r/deploy -qO - | bash
wget https://git.myceliandre.fr/Myceliandre/myc-manage/raw/branch/master/bin/myc-install -qO - | bash
#+END_SRC
****** On a VPS with mailcow pre-installed
@@ -93,7 +93,7 @@ wget https://justodooit.myceliandre.fr/r/deploy -qO - | bash
#+BEGIN_SRC sh
export WITHOUT_DOCKER_CLEAN=1 ## only if you want to remove docker clean from cron
export NO_DOCKER_RESTART=1 ## Can use that only if mailcow is pre-existing
wget https://justodooit.myceliandre.fr/r/deploy -qO - | bash
wget https://git.myceliandre.fr/Myceliandre/myc-manage/raw/branch/master/bin/myc-install -qO - | bash
#+END_SRC
**** macOS host
@@ -460,7 +460,7 @@ vps odoo restore db.zip
vps odoo restore /tmp/db.zip -s odoo2
## restore 'odoodev' database of default 'odoo' service
vps odoo restore /tmp/db.zip -d odoodev
vps odoo restore /tmp/db.zip -D odoodev
## restore on database of default 'odoo' service, and neutralize the restored base
vps odoo restore -n /tmp/db.zip
@@ -1165,6 +1165,102 @@ following: =mailcow=, =postfix=, =rspamd=, =redis=, =crypt=, =vmail=,
0km vps-backup recover myadmin@core-06.0k.io:10023#mail.mybackupedvps.com:postfix mynewvps.com
#+end_src
** Using alerting
A ~send~ command is provided to send alerts via [[https://docs.ntfy.sh/][ntfy]];
by default it publishes these notifications to http://ntfy.0k.io .
By default, the VPS has a topic of its own (on which it has
publishing rights). The various kinds of notifications can be
redirected to whichever topics you want, so that these VPS can be
administered.
So, for admins, the ~0km vps-subscribe~ command makes it easy to add
new topics and, at the same time, ensures that the VPS is granted
publishing rights on those topics.
*** The ~send~ command
On the VPS, the ~send~ command sends a message to one or more topics
(see ~send --help~ for more information). The command sends a message
on a channel, and that message is delivered to every topic associated
with that channel.
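As a minimal illustration (the channel name and message are arbitrary;
the actual routing depends on the ~/etc/ntfy/topics.yml~ mapping
described below):
#+begin_src sh
## publish "disk almost full" on the channel "disk.warning";
## the message is delivered to every topic mapped to that channel
send -c disk.warning "disk almost full"
#+end_src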
**** Notification server configuration
The notification server configuration and the unique identifiers of
the VPS are stored in the file ~/etc/ntfy/ntfy.conf~.
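A minimal sketch of what this file may contain (hypothetical values;
only the ~LOGIN~ variable is actually relied upon by ~0km~, which
sources this file to retrieve the VPS identifier, so the other key
names may differ on your installation):
#+begin_src sh
## /etc/ntfy/ntfy.conf -- illustrative content only
SERVER=https://ntfy.0k.io   ## notification broker (assumed key name)
LOGIN=vps_myhost            ## unique VPS identifier, read as $LOGIN by 0km
#+end_src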
**** Topic configuration
Channels/topics are configured in the file ~/etc/ntfy/topics.yml~.
Example:
- Default configuration
#+begin_src yaml
.*\.(emerg|alert|crit|err|warning|notice):
- ${LOGIN}_main
#+end_src
You can then use:
#+begin_src sh
send -c backup.crit "no backup done in 24h"
#+end_src
which will be published on the =ntfy= server of the current
configuration, in the ~${LOGIN}_main~ topic, as will any other message
on a channel ending in ~.emerg~, ~.alert~, etc.
- Specific configuration
#+begin_src yaml
.*\.alert:
- elabore_alert
#+end_src
If you *add* the previous configuration, the following command:
#+begin_src sh
send -c disk.alert "no space left"
#+end_src
... will send the message to the previous topic ~${LOGIN}_main~ as
well as to the ~elabore_alert~ topic, since the channel name ends in
~.alert~.
- Channel with multiple topics
#+begin_src yaml
main:
- maintenance
- debug_foo
- ${LOGIN}_main
#+end_src
The command:
#+begin_src sh
send -c main "no space left"
#+end_src
... will send to the ~maintenance~, ~debug_foo~ and ~${LOGIN}_main~
topics (the latter being a topic created for the VPS by default).
*** Adding/removing write access to topics
From an admin workstation (via the ~0km~ command), and after having
requested access to the destination ntfy server (an SSH key will be
needed), you can use the ~0km vps-subscribe~ subcommand.
#+begin_src sh
## Add
0km vps-subscribe add CHANNEL TOPIC VPS
## Remove
0km vps-subscribe rm CHANNEL TOPIC VPS
#+end_src
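For instance, a hypothetical invocation (host and topic names are made
up; the TOPIC format "[MYSERVER:]MYTOPICS" comes from the command's own
help) could look like:
#+begin_src sh
## route channel 'backup.crit' of vps 'myvps.example.com'
## to the extra topic 'admin_alerts' on the default broker
0km vps-subscribe add backup.crit admin_alerts myvps.example.com
#+end_src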
** Troubleshooting
@@ -1193,14 +1289,14 @@ compose --debug up
If this command does not work, take the time to read the error
message carefully.
**** Clearing the caches in ~/var/cache/compose~
**** Clearing the ~compose~ caches
In case of unexplained, previously unseen problems, it is worth
checking whether clearing the compose caches fixes the issue:
#+begin_src sh
rm /var/cache/compose/*
compose --debug cache clear
#+end_src
Then re-run the command that was failing (for example ~compose

bin/0km (297)

@@ -259,7 +259,7 @@ vps_check() {
fi </dev/null
compose_content=$(ssh:run "root@$vps" -- cat /opt/apps/myc-deploy/compose.yml </dev/null) ||
{ echo "${DARKRED}no-compose${NORMAL}"; return 1; }
echo "$compose_content" | grep backup >/dev/null 2>&1 ||
echo "$compose_content" | yq -e ".rsync-backup" >/dev/null 2>&1 ||
{ echo "${DARKRED}no-backup${NORMAL}"; return 1; } { echo "${DARKRED}no-backup${NORMAL}"; return 1; }
} }
@ -656,6 +656,156 @@ EOF
} }
NTFY_TOPIC_FILE="/etc/ntfy/topics.yml"
NTFY_CONFIG_FILE="/etc/ntfy/ntfy.conf"
subscribe:ntfy:topic-file-exists() {
local vps="$1"
if ! out=$(echo "[ -f \"$NTFY_TOPIC_FILE\" ] && echo ok || true" | \
ssh:run "root@$vps" -- bash); then
err "Unable to check for existence of '$NTFY_TOPIC_FILE'."
fi
if [ -z "$out" ]; then
err "File '$NTFY_TOPIC_FILE' not found on $vps."
return 1
fi
}
subscribe:ntfy:config-file-exists() {
local vps="$1"
if ! out=$(echo "[ -f \"$NTFY_CONFIG_FILE\" ] && echo ok || true" | \
ssh:run "root@$vps" -- bash); then
err "Unable to check for existence of '$NTFY_CONFIG_FILE'."
fi
if [ -z "$out" ]; then
err "File '$NTFY_CONFIG_FILE' not found on $vps."
return 1
fi
}
ntfy:rm() {
local channel="$1" topic="$2" vps="$3"
subscribe:ntfy:topic-file-exists "$vps" || return 1
if ! out=$(echo "yq -i 'del(.[\"$channel\"][] | select(. == \"$TOPIC\"))' \"$NTFY_TOPIC_FILE\"" | \
ssh:run "root@$vps" -- bash); then
err "Failed to remove channel '$channel' from '$NTFY_TOPIC_FILE'."
return 1
fi
info "Channel '$channel' removed from '$NTFY_TOPIC_FILE' on $vps."
ssh:run "root@$vps" -- cat "$NTFY_TOPIC_FILE"
}
ntfy:add() {
local channel="$1" topic="$2" vps="$3"
vps_connection_check "$vps" </dev/null || return 1
subscribe:ntfy:topic-file-exists "$vps" || return 1
if ! out=$(echo "yq '. | has(\"$channel\")' \"$NTFY_TOPIC_FILE\"" | \
ssh:run "root@$vps" -- bash); then
err "Failed to check if channel '$channel' with topic '$topic' is already in '$NTFY_TOPIC_FILE'."
return 1
fi
if [ "$out" != "true" ]; then
## Channel does not exist
if ! out=$(echo "yq -i '.[\"$channel\"] = []' \"$NTFY_TOPIC_FILE\"" | \
ssh:run "root@$vps" -- bash); then
err "Failed to create a new channel '$channel' entry in '$NTFY_TOPIC_FILE'."
return 1
fi
else
## Channel exists
if ! out=$(echo "yq '.[\"$channel\"] | any_c(. == \"$topic\")' \"$NTFY_TOPIC_FILE\"" | \
ssh:run "root@$vps" -- bash); then
err "Failed to check if channel '$channel' with topic '$topic' is already in '$NTFY_TOPIC_FILE'."
return 1
fi
if [ "$out" == "true" ]; then
info "Channel '$channel' with topic '$topic' already exists in '$NTFY_TOPIC_FILE'."
return 0
fi
fi
if ! out=$(echo "yq -i '.[\"$channel\"] += [\"$topic\"]' \"$NTFY_TOPIC_FILE\"" | \
ssh:run "root@$vps" -- bash); then
err "Failed to add channel '$channel' with topic '$topic' to '$NTFY_TOPIC_FILE'."
return 1
fi
info "Channel '$channel' added with topic '$topic' to '$NTFY_TOPIC_FILE' on $vps."
}
NTFY_BROKER_SERVER="ntfy.0k.io"
ntfy:topic-access() {
local action="$1" topic="$2" vps="$3"
subscribe:ntfy:config-file-exists "$vps" || return 1
local user
user=$(ntfy:get-login "$vps") || return 1
case "$action" in
"write")
ssh "ntfy@$NTFY_BROKER_SERVER" "topic-access" \
"$user" "$topic" "write-only" </dev/null || {
err "Failed to grant write access to '$user' for topic '$topic'."
return 1
}
info "Granted write access for '$user' to topic '$topic'."
;;
"remove")
ssh "ntfy@$NTFY_BROKER_SERVER" "topic-access" -r "$user" "$topic" </dev/null || {
err "Failed to reset access of '$user' for topic '$topic'."
return 1
}
info "Access for '$user' to topic '$topic' was resetted successfully."
;;
*)
err "Invalid action '$action'."
return 1
;;
esac
}
ntfy:get-login() {
local vps="$1"
if ! out=$(echo ". \"$NTFY_CONFIG_FILE\" && echo \"\$LOGIN\"" | \
ssh:run "root@$vps" -- bash); then
err "Failed to get ntfy login from '$NTFY_CONFIG_FILE'."
return 1
fi
if [ -z "$out" ]; then
err "Unexpected empty login retrieved from sourcing '$NTFY_CONFIG_FILE'."
return 1
fi
echo "$out"
}
subscribe:add() {
local vps="$1"
read-0 channel topic || {
err "Couldn't read CHANNEL and TOPIC arguments."
return 1
}
vps_connection_check "$vps" </dev/null || return 1
ntfy:topic-access "write" "$topic" "$vps" </dev/null || return 1
ntfy:add "$channel" "$topic" "$vps" || {
err "Failed to add channel '$channel' with topic '$topic' to '$NTFY_TOPIC_FILE'."
echo " Removing topic access." >&2
ntfy:topic-access "remove" "$topic" "$vps" </dev/null
return 1
}
}
subscribe:rm() {
local vps="$1"
read-0 channel topic || {
err "Couldn't read CHANNEL and TOPIC arguments."
return 1
}
vps_connection_check "$vps" </dev/null || return 1
ntfy:rm "$channel" "$topic" "$vps" || return 1
ntfy:topic-access "remove" "$topic" "$vps" </dev/null || {
err "Failed to remove topic access for '$topic' on '$vps'."
return 1
}
}
vps_backup_recover() {
local vps="$1" admin server id path rtype force type
@@ -757,13 +907,25 @@ vps_install_backup() {
vps_connection_check "$vps" </dev/null || return 1
read-0 admin server
if ! type=$(ssh:run "root@$vps" -- vps get-type); then
if ! type=$(ssh:run "root@$vps" -- vps get-type </dev/null); then
err "Could not get type." err "Could not get type."
return 1 return 1
fi fi
if ! out=$(ssh:run "root@$vps" -- vps install backup "$server" 2>&1); then
err "Command 'vps install backup $server' on $vps failed:"
backup_opts=()
local opt
while read-0 opt; do
case "$opt" in
--ignore-domain-check|--ignore-ping-check)
backup_opts+=("$opt")
;;
*)
err "Unknown option '$opt'."
return 1
;;
esac
done
if ! out=$(ssh:run "root@$vps" -- vps install backup "$server" "${backup_opts[@]}" 2>&1); then
err "Command 'vps install backup $server ${backup_opts[@]}' on $vps failed:"
echo "$out" | prefix " ${DARKGRAY}|${NORMAL} " >&2 echo "$out" | prefix " ${DARKGRAY}|${NORMAL} " >&2
return 1 return 1
fi fi
@ -791,16 +953,16 @@ vps_install_backup() {
if [ "$type" == "compose" ]; then if [ "$type" == "compose" ]; then
if ! ssh:run "root@$vps" -- \ if ! ssh:run "root@$vps" -- \
docker exec myc_cron_1 \ docker exec myc_cron_1 \
cat /etc/cron.d/rsync-backup >/dev/null 2>&1; then
grep rsync-backup /etc/crontabs/root >/dev/null 2>&1; then
ssh:run "root@$vps" -- compose --debug up || { ssh:run "root@$vps" -- compose --debug up || {
err "Command 'compose --debug up' failed." err "Command 'compose --debug up' failed."
return 1 return 1
} }
if ! ssh:run "root@$vps" -- \ if ! ssh:run "root@$vps" -- \
docker exec myc_cron_1 \ docker exec myc_cron_1 \
cat /etc/cron.d/rsync-backup >/dev/null 2>&1; then
grep rsync-backup /etc/crontabs/root >/dev/null 2>&1; then
err "Launched 'compose up' successfully but ${YELLOW}cron${NORMAL} container is not setup as expected." err "Launched 'compose up' successfully but ${YELLOW}cron${NORMAL} container is not setup as expected."
echo " Was waiting for existence of '/etc/cron.d/rsync-backup' in it." >&2
echo " Was waiting for existence of a line mentionning 'rsync-backup' in '/etc/crontabs/root' in it." >&2
return 1
fi
fi
@@ -1072,6 +1234,11 @@ cmdline.spec:vps-install:cmd:backup:run() {
: :posarg: BACKUP_TARGET 'Backup target.
(ie: myadmin@backup.domain.org:10023/256)'
: :optfla: --ignore-domain-check \
"Bypass the domain check in the
compose file (only used in compose
installations)."
: :optfla: --ignore-ping-check "Bypass the ping check of the host."
: :posarg: [VPS...] 'Target host(s) to check'
@@ -1088,7 +1255,15 @@ cmdline.spec:vps-install:cmd:backup:run() {
admin=${BACKUP_TARGET%%@*}
server=${BACKUP_TARGET#*@}
p0 "$admin" "$server" |
opts=()
[ -n "$opt_ignore_ping_check" ] &&
opts+=("--ignore-ping-check")
[ -n "$opt_ignore_domain_check" ] &&
opts+=("--ignore-domain-check")
p0 "$admin" "$server" "${opts[@]}" |
vps_mux vps_install_backup "${VPS[@]}"
}
@@ -1246,7 +1421,7 @@ cmdline.spec::cmd:vps-stats:run() {
opts_rrdfetch+=(-e "$end")
fi
fi
local resources=(c.memory c.network load_avg)
local resources=(c.memory c.network load_avg disk)
if [ -n "${opt_resource}" ]; then if [ -n "${opt_resource}" ]; then
resources=(${opt_resource//,/ }) resources=(${opt_resource//,/ })
fi fi
@ -1464,7 +1639,7 @@ graph:def:c.memory() {
fi fi
container="${container//\'/}" container="${container//\'/}"
container="${container//@/\\@}" container="${container//@/\\@}"
echo -n " ${rrdfetch_cmd} u 1:((\$3 - \$2)/1000000000) w lines title '${container//_/\\_}'"
echo -n " ${rrdfetch_cmd} u 1:(\$3/(1000*1000*1000)) w lines title '${container//_/\\_}'"
done
echo
}
@@ -1593,5 +1768,105 @@ graph:def:load_avg() {
echo
}
graph:def:disk() {
local vps="$1" i="$2"
shift 2
local opts_rrdfetch=("$@")
rrd_vps_path="$VAR_DIR/rrd/$vps"
[ -f "$rrd_vps_path/$resource.rrd" ] || {
warn "No containers data yet for vps '$vps'... Ignoring"
return 0
}
gnuplot_line_config=(
"set term qt $i title \"$vps $resource\" replotonresize noraise"
"set title '$vps'"
"set xdata time"
"set timefmt '%s'"
"set ylabel '${resource//_/\\_} Usage'"
"set format y '%s'"
"set ytics format '%g GiB'"
"set mouse mouseformat 6"
"set yrange [0:*] "
"set border behind"
)
printf "%s\n" "${gnuplot_line_config[@]}"
first=1
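## The fetched RRD lines are expected to read "timestamp used size" (KiB from df);
## column 2 (used) and column 3 (size) are plotted below, converted to GiB.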
for value in used:2 size:3; do
label="${value%:*}"
col_num="${value#*:}"
rrdfetch_cmd="'< rrdtool fetch \"$rrd_vps_path/$resource.rrd\""
rrdfetch_cmd+=" AVERAGE ${opts_rrdfetch[*]} | \\"$'\n'
rrdfetch_cmd+=" tail -n +2 | \\"$'\n'
rrdfetch_cmd+=" egrep -v \"^$\" | sed -r \"s/ -?nan/ -/g;s/^([0-9]+): /\\1 /g\"'"
rrdfetch_cmd_bash=$(eval echo "${rrdfetch_cmd}")
rrdfetch_cmd_bash=${rrdfetch_cmd_bash#< }
first_ts=
first_ts=$(eval "$rrdfetch_cmd_bash" | head -n 1 | cut -f 1 -d " ")
if [ -z "$first_ts" ]; then
warn "No data for $resource on vps $vps, skipping..."
continue
fi
last_ts=$(eval "$rrdfetch_cmd_bash" | tail -n 1 | cut -f 1 -d " ")
if [[ -z "$data_start_ts" ]] || [[ "$data_start_ts" > "$first_ts" ]]; then
data_start_ts="$first_ts"
fi
if [[ -z "$data_stop_ts" ]] || [[ "$data_stop_ts" < "$last_ts" ]]; then
data_stop_ts="$last_ts"
fi
if [ -n "$first" ]; then
first=
echo "plot \\"
else
echo ", \\"
fi
container="${container//\'/}"
container="${container//@/\\@}"
echo -n " ${rrdfetch_cmd} u 1:(\$${col_num}/(1024*1024)) w lines title '${label}'"
done
echo
}
cmdline.spec.gnu vps-subscribe
cmdline.spec::cmd:vps-subscribe:run() {
:
}
cmdline.spec.gnu add
cmdline.spec:vps-subscribe:cmd:add:run() {
: :posarg: CHANNEL 'Channel whose messages will be sent to the given topic'
: :posarg: TOPIC 'Ntfy topic to receive messages of the given channel
(format: "[MYSERVER:]MYTOPICS"
Examples: "ntfy.0k.io:main,storage,alerts",
"main{1,3,7}"
)'
: :posarg: [VPS...] 'Target host(s) on which to subscribe'
printf "%s\0" "$CHANNEL" "$TOPIC" |
vps_mux subscribe:add "${VPS[@]}"
}
cmdline.spec.gnu rm
cmdline.spec:vps-subscribe:cmd:rm:run() {
: :posarg: CHANNEL 'Channel whose messages will be sent to the given topic'
: :posarg: TOPIC 'Ntfy topic to receive messages of the given channel
(format: "[MYSERVER:]MYTOPICS"
Examples: "ntfy.0k.io:main,storage,alerts",
"main{1,3,7}"
)'
: :posarg: [VPS...] 'Target host(s) on which to unsubscribe'
printf "%s\0" "$CHANNEL" "$TOPIC" |
vps_mux subscribe:rm "${VPS[@]}"
}
cmdline::parse "$@" cmdline::parse "$@"

bin/myc-update (62)

@@ -7,6 +7,37 @@ include common
include pretty
MIN_DISK_SPACE="${MIN_DISK_SPACE:-300M}"
## convert human size to bytes using numfmt
## Check remaining disk space
if [ -n "$MIN_DISK_SPACE" ]; then
min_disk_space_kbytes=$(numfmt --from=iec --to-unit=1024 "$MIN_DISK_SPACE") || {
err "Invalid format for '\$MIN_DISK_SPACE'."
exit 1
}
if ! remaining_kbytes=$(df / | awk 'NR==2 {print $4}'); then
err "Failed to get remaining disk space."
exit 1
fi
if [ "$remaining_kbytes" -lt "$min_disk_space_kbytes" ]; then
err "Not enough disk space."
human_min_disk_space=$(numfmt --to=iec --format="%.2f" --from-unit=1024 "$min_disk_space_kbytes") || {
err "Failed to convert '\$MIN_DISK_SPACE' to human readable format."
exit 1
}
human_remaining_kbytes=$(numfmt --to=iec --format="%.2f" --from-unit=1024 "$remaining_kbytes") || {
err "Failed to convert '\$remaining_kbytes' to human readable format."
exit 1
}
echo " - At least $human_min_dist_space are required." >&2
echo " - Only $human_remaining_kbytes are available." >&2
exit 1
fi
fi
start=$SECONDS
if [ -z "$NO_UPDATE" -a -d "/opt/apps/myc-manage" ]; then
@@ -61,10 +92,37 @@ docker pull docker.0k.io/letsencrypt
EOF
Wrap -d "Updating cron scripts" <<EOF || exit 1
ln -sfn /opt/apps/myc-manage/etc/cron.d/* /etc/cron.d/
find -L /etc/cron.d -maxdepth 1 -type l -ilname /opt/apps/myc-manage/etc/cron.d/\* -delete
for d in /etc/cron.{d,daily,hourly,monthly,weekly}; do
ln -sfn "/opt/apps/myc-manage\$d/"* "\$d/" &&
find -L "\$d" -maxdepth 1 -type l -ilname "/opt/apps/myc-manage\$d/"\* -delete
done
EOF
Wrap -d "Updating sysctl scripts" <<EOF || exit 1
for d in /etc/sysctl.d; do
ln -sfn "/opt/apps/myc-manage\$d/"* "\$d/" &&
find -L "\$d" -maxdepth 1 -type l -ilname "/opt/apps/myc-manage\$d/"\* -delete
done
EOF
if [ -f "/root/.bashrc" ]; then
Wrap -d "Enable colors in bash" <<'EOF' || exit 1
sed -ri 's/^# (export LS_OPTIONS=.--color=auto.)/\1/;
s/^# (eval "`dircolors`")/\1/;
s/^# (alias ls='"'ls \\\$LS_OPTIONS'"')/\1/' /root/.bashrc
EOF
fi
## add option to limit log size
journalctl_config_file="/etc/systemd/journald.conf"
if [ -f "$journalctl_config_file" ] &&
! grep -q "^SystemMaxUse=" "$journalctl_config_file"; then
Wrap -d "Limit system journal logs to 200M" <<EOF || exit 1
sed -ri 's/^#SystemMaxUse=$/SystemMaxUse=200M/g' "$journalctl_config_file"
systemctl restart systemd-journald.service
EOF
fi
for keyfile in {/root,/home/debian}/.ssh/authorized_keys; do
[ -e "$keyfile" ] || continue
sed -ri 's%^ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDri3GHzDt0Il0jv6zLjwkge48dN9tv11sqVNnKoDeUxzk4kn7Ng5ldd3p6dYL6Pa5NDqJUAhO/d/q08IWuwfEbtj8Yc/EkahcRwVD2imPceUeDgyCaOJhq7WO4c9d9yG8PnRO2\+Zk92a9L5vuELVLr4UHIQOs2/eFRY2/ODV8ebf5L1issGzfLd/IPhX5oJwMwKfqIFOP7KPQ26duHNRq4bYOD9ePW4shfxmyQDk6dSImFat05ErT\+X7703PcPx/PX2AIqqz95zqM6M26BywAohuaD5joxKgkd/mMIJylvT8GEYDlcLMHwnM7LtwtyJ1O9dkVpsibIqGy20KlAOGPf admin@0k$%ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMV3USt/BLnXnUk7rk8v42mISZaXBZuULbh2vx2Amk7k admin@old0kreplacement%g' "$keyfile"

bin/vps (107)

@@ -436,11 +436,11 @@ compose:install-backup() {
ping_check "$host" || return 1
if [ -e "/root/.ssh/rsync_rsa" ]; then
warn "deleting private key in /root/.ssh/rsync_rsa, has we are not using it anymore."
warn "deleting private key in /root/.ssh/rsync_rsa, as we are not using it anymore."
rm -fv /root/.ssh/rsync_rsa
fi
if [ -e "/root/.ssh/rsync_rsa.pub" ]; then
warn "deleting public key in /root/.ssh/rsync_rsa.pub, has we are not using it anymore."
warn "deleting public key in /root/.ssh/rsync_rsa.pub, as we are not using it anymore."
rm -fv /root/.ssh/rsync_rsa.pub
fi
@@ -887,8 +887,44 @@ export -f cyclos:unlock
rocketchat:drop-indexes() {
local project_name="$1" dbname="$2"
echo "db.users.dropIndexes()" |
compose:mongo "${project_name}" "${dbname}"
compose:mongo "${project_name}" "${dbname}" <<'EOF'
db.users.dropIndexes();
// Check if the 'rocketchat_uploads' collection exists
var collections = db.getCollectionNames();
if (collections.indexOf('rocketchat_uploads') !== -1) {
db.rocketchat_uploads.dropIndexes();
}
if (collections.indexOf('rocketchat_read_receipts') !== -1) {
db.rocketchat_read_receipts.dropIndexes();
var duplicates = [];
db.getCollection("rocketchat_read_receipts").aggregate([
{
"$group": {
"_id": { "roomId": "$roomId", "userId": "$userId", "messageId": "$messageId" },
"uniqueIds": { "$addToSet": "$_id" },
"count": { "$sum": 1 }
}
},
{ "$match": { "count": { "$gt": 1 } } }
],
{ allowDiskUse: true }
).forEach(function (doc) {
// remove 1st element
doc.uniqueIds.shift();
doc.uniqueIds.forEach(function (dupId) {
duplicates.push(dupId);
}
)
})
// printjson(duplicates);
db.getCollection("rocketchat_read_receipts").remove({ _id: { $in: duplicates } });
}
EOF
}
export -f rocketchat:drop-indexes
@@ -916,9 +952,25 @@ compose:get_cron_docker_cmd() {
local cron_line cmd_line docker_cmd
project_name=$(compose:project_name) || return 1
container=$(compose:service:containers "${project_name}" "cron") || {
err "Can't find service 'cron' in project ${project_name}."
return 1
}
if docker exec "$container" test -e /etc/cron.d/rsync-backup; then
if ! cron_line=$(docker exec "${project_name}"_cron_1 cat /etc/cron.d/rsync-backup | grep "\* \* \*"); then if ! cron_line=$(docker exec "${project_name}"_cron_1 cat /etc/cron.d/rsync-backup | grep "\* \* \*"); then
err "Can't find cron_line in cron container."
echo " Have you forgotten to run 'compose up' ?" >&2
err "Can't find cron line in legacy cron container."
return 1
fi
elif docker exec "$container" test -e /etc/crontabs/root; then
if ! cron_line=$(docker exec "$container" cat /etc/crontabs/root | grep " launch-rsync-backup " | grep "\* \* \*"); then
err "Can't find cron line in cron container."
return 1
fi
else
err "Unrecognized cron container:"
echo " Can't find neither:" >&2
echo " - /etc/cron.d/rsync-backup for old-style cron services" >&2
echo " - nor /etc/crontabs/root for new-style cron services." >&2
return 1
fi
@@ -1065,7 +1117,7 @@ EOF
container:health:check-fix:no-matching-entries() {
local container_id="$1"
out=$(docker exec "$container_id" echo 2>&1)
out=$(docker exec -u root "$container_id" echo 2>&1)
errlvl=$?
[ "$errlvl" == 0 ] && return 0
service_name=$(docker ps --filter id="$container_id" --format '{{.Label "com.docker.compose.service"}}')
@@ -1077,13 +1129,13 @@ docker restart "$container_id"
sleep 2
docker restart "$container_id"
EOF
return $errlvl
return 2
fi
warn "Unknown issue with ${DARKYELLOW}$service_name${NORMAL}'s container:"
echo " ${WHITE}cmd:${NORMAL} docker exec -ti $container_id echo" >&2
echo "$out" | prefix " ${DARKGRAY}|${NORMAL} " >&2
echo " ${DARKGRAY}..${NORMAL} leaving this as-is."
return $errlvl
return 1
}
docker:api() {
@@ -1650,7 +1702,18 @@ cmdline.spec:odoo:cmd:restore:run() {
opts_load=()
[ "$opt_neutralize" ] && opts_load+=("--neutralize")
#cmdline.spec:odoo:cmd:restart:run --service "$odoo_service" || exit 1
project_name=$(compose:project_name) || exit 1
container:health:check-fix:no-matching-entries "${project_name}_${odoo_service}_1"
case "$?" in
0)
debug "Container ${project_name}_${odoo_service}_1 is healthy."
;;
1) err "Container ${project_name}_${odoo_service}_1 is not healthy."
exit 1
;;
2) info "Container ${project_name}_${odoo_service}_1 was fixed."
;;
esac
msg_dbname=default
[ -n "$opt_database" ] && msg_dbname="'$opt_database'"
@@ -2142,7 +2205,7 @@ cmdline.spec::cmd:stats:run() {
return 1
esac
local resources=(c.{memory,network} load_avg)
local resources=(c.{memory,network} load_avg disk)
if [ -n "${opt_resource}" ]; then if [ -n "${opt_resource}" ]; then
resources=(${opt_resource//,/ }) resources=(${opt_resource//,/ })
fi fi
@ -2292,6 +2355,28 @@ stats:load_avg() {
esac esac
} }
stats:disk() {
local format="$1"
local out
disk_used_size=$(df --output=used,size / | tail -n 1) || return 1
out=$(printf "%s " "" "$(date +%s)" "${disk_used_size// / }")
printf "%s\n" "$out" | rrd:update "" "disk|2:used:GAUGE:U:U,3:size:GAUGE:U:U" || {
return 1
}
case "${format:-p}" in
raw|r)
printf "%s\n" "$out" | cut -f 2-4 -d " "
;;
pretty|p)
{
echo "__used" "__size"
printf "%s\n" "$out" | cut -f 3-5 -d " " |
numfmt --field 1-2 --from-unit=1024 --to=iec-i --format=%8.1fB
} | col:normalize:size ++ | header:make
;;
esac
}
host:sys:load_avg() {
local uptime

etc/sysctl.d/90-inotify_watches (1)

@@ -0,0 +1 @@
fs.inotify.max_user_watches = 524288