Compare commits
merge into: 0k:master
0k:0k/dev/master
0k:backup
0k:bgallet/mattermost
0k:bgallet/nextcloud
0k:boris/smtp-extern
0k:charm-codimd-new
0k:cups_service_alpha
0k:dev
0k:dev1
0k:dhcp
0k:element
0k:etherpad-upd
0k:framadate
0k:get-version
0k:lokavaluto/dev/master
0k:master
0k:matomo
0k:new-mailhog-charms
0k:new-monujo-options
0k:nj-collabra-office
0k:nj-keycloak-17.0
0k:nj-organice-charm
0k:nj-vaulwarden-migrate
0k:ntfy-install
0k:odoo_fix_webhook_url
0k:postgres
0k:test
0k:upd-docker
0k:update-latest-synapse
0k:wip
bgallet:0k/dev/master
bgallet:backup
bgallet:bgallet/mattermost
bgallet:boris/docuseal
bgallet:boris/matomo
bgallet:boris/rallly
bgallet:boris/smtp-extern
bgallet:charm-codimd-new
bgallet:cups_service_alpha
bgallet:dev
bgallet:dev1
bgallet:dhcp
bgallet:discourse
bgallet:element
bgallet:etherpad-upd
bgallet:framadate
bgallet:hedgedoc
bgallet:lokavaluto/dev/master
bgallet:master
bgallet:matomo
bgallet:nanoyaml
bgallet:netdata
bgallet:new-mailhog-charms
bgallet:new-monujo-options
bgallet:nextcloud
bgallet:nj-collabra-office
bgallet:nj-keycloak-17.0
bgallet:nj-organice-charm
bgallet:nj-vaulwarden-migrate
bgallet:odoo_fix_webhook_url
bgallet:postgres
bgallet:rallly
bgallet:test
bgallet:upd
bgallet:upd-docker
bgallet:update-latest-synapse
bgallet:wip
pull from: bgallet:master
bgallet:0k/dev/master
bgallet:backup
bgallet:bgallet/mattermost
bgallet:boris/docuseal
bgallet:boris/matomo
bgallet:boris/rallly
bgallet:boris/smtp-extern
bgallet:charm-codimd-new
bgallet:cups_service_alpha
bgallet:dev
bgallet:dev1
bgallet:dhcp
bgallet:discourse
bgallet:element
bgallet:etherpad-upd
bgallet:framadate
bgallet:hedgedoc
bgallet:lokavaluto/dev/master
bgallet:master
bgallet:matomo
bgallet:nanoyaml
bgallet:netdata
bgallet:new-mailhog-charms
bgallet:new-monujo-options
bgallet:nextcloud
bgallet:nj-collabra-office
bgallet:nj-keycloak-17.0
bgallet:nj-organice-charm
bgallet:nj-vaulwarden-migrate
bgallet:odoo_fix_webhook_url
bgallet:postgres
bgallet:rallly
bgallet:test
bgallet:upd
bgallet:upd-docker
bgallet:update-latest-synapse
bgallet:wip
0k:0k/dev/master
0k:backup
0k:bgallet/mattermost
0k:bgallet/nextcloud
0k:boris/smtp-extern
0k:charm-codimd-new
0k:cups_service_alpha
0k:dev
0k:dev1
0k:dhcp
0k:element
0k:etherpad-upd
0k:framadate
0k:get-version
0k:lokavaluto/dev/master
0k:master
0k:matomo
0k:new-mailhog-charms
0k:new-monujo-options
0k:nj-collabra-office
0k:nj-keycloak-17.0
0k:nj-organice-charm
0k:nj-vaulwarden-migrate
0k:ntfy-install
0k:odoo_fix_webhook_url
0k:postgres
0k:test
0k:upd-docker
0k:update-latest-synapse
0k:wip
29 Commits
73 changed files with 2484 additions and 1000 deletions
-
138README.org
-
188apache/README.org
-
29apache/README.rst
-
4apache/hooks/publish_dir-relation-joined
-
4apache/hooks/web_proxy-relation-joined
-
317apache/lib/common
-
316apache/test/get_domains
-
17apache/test/vhost
-
2apache/test/vhost_cert_provider
-
14apache/test/vhost_files
-
4collabora/hooks/web_proxy-relation-joined
-
4collabora/metadata.yml
-
19cron/README.org
-
32cron/build/Dockerfile
-
16cron/build/README
-
32cron/build/entrypoint.sh
-
2cron/build/src/usr/bin/README
-
BINcron/build/src/usr/bin/docker-1.9.1
-
BINcron/build/src/usr/bin/docker-17.06.2-ce
-
363cron/build/src/usr/bin/lock
-
3cron/hooks/init
-
45cron/hooks/pre_deploy
-
210cron/lib/common
-
9cron/metadata.yml
-
155cron/test/entries_from_service
-
151cron/test/get_config
-
90cron/test/lock_opts
-
2cyclos/lib/common
-
100docker-host/hooks/install.d/90-ntfy.sh
-
166docker-host/src/bin/send
-
BINdocker-host/src/etc/ssh/ntfy-key
-
44gogocarto/hooks/schedule_commands-relation-joined
-
2gogocarto/lib/common
-
14gogocarto/metadata.yml
-
22hedgedoc/README.org
-
1letsencrypt/actions/crt
-
34letsencrypt/hooks/schedule_command-relation-joined
-
8letsencrypt/lib/common
-
4letsencrypt/metadata.yml
-
30logrotate/hooks/schedule_command-relation-joined
-
5logrotate/metadata.yml
-
62mariadb/hooks/schedule_command-relation-joined
-
15mariadb/hooks/sql_database-relation-joined
-
1mariadb/metadata.yml
-
2mongo/actions/relations/mongo-database/mongosh
-
67mongo/hooks/schedule_command-relation-joined
-
2mongo/metadata.yml
-
1nextcloud/actions/occ
-
2nextcloud/actions/upgrade
-
7nextcloud/hooks/init
-
51nextcloud/hooks/mysql_database-relation-joined
-
1nextcloud/hooks/mysql_database-relation-joined
-
52nextcloud/hooks/postgres_database-relation-joined
-
75nextcloud/hooks/sql_database-relation-joined
-
31nextcloud/hooks/web_proxy-relation-joined
-
44nextcloud/lib/common
-
11nextcloud/metadata.yml
-
40odoo-tecnativa/README.org
-
8odoo-tecnativa/README.rst
-
25odoo-tecnativa/actions/load
-
39odoo-tecnativa/hooks/init
-
2piwigo/hooks/post_deploy
-
77postgres/hooks/schedule_command-relation-joined
-
16postgres/hooks/sql_database-relation-joined
-
1postgres/metadata.yml
-
2rallly/hooks/init
-
2rallly/metadata.yml
-
3rocketchat/README.org
-
3rsync-backup-target/build/Dockerfile
-
9rsync-backup-target/build/src/usr/local/sbin/ssh-admin-cmd-validate
-
147rsync-backup-target/build/src/usr/local/sbin/ssh-key
-
5rsync-backup-target/hooks/init
-
69rsync-backup/hooks/schedule_command-relation-joined
-
2sftp/lib/common
@ -0,0 +1,188 @@ |
|||||
|
|
||||
|
|
||||
|
* Usage |
||||
|
|
||||
|
Other services will often require a service managed with this charm to |
||||
|
act as a HTTP/HTTPS front-end. It can provide certificates with HTTPS. |
||||
|
|
||||
|
|
||||
|
** Domain assignment |
||||
|
|
||||
|
Services using relation =web-proxy= or =publish-dir= will be required |
||||
|
to be assigned a domain name for the virtual host that will be |
||||
|
created. |
||||
|
|
||||
|
*** Domain sources |
||||
|
|
||||
|
This domain name can be set (in order of priority), the first source |
||||
|
giving a name will be taken. |
||||
|
|
||||
|
- *Relation's options* (=web-proxy= or =publish-dir=) |
||||
|
Using =domain= option, and optionally the deprecated |
||||
|
=server-aliases= for additional names. |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
myservice: |
||||
|
# ... |
||||
|
relations: |
||||
|
web-proxy: |
||||
|
apache: |
||||
|
domain: mydomain.org |
||||
|
#server-aliases: |
||||
|
# - www.mydomain.org |
||||
|
# - pro.mydomain.org |
||||
|
#+end_src |
||||
|
- *Apache service's options*, using a =service-domain-name= mapping: |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
myservice: |
||||
|
# ... |
||||
|
apache: |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
# ... |
||||
|
myservice: |
||||
|
- mydomain.org |
||||
|
- www.mydomain.org |
||||
|
- pro.mydomain.org |
||||
|
# ... |
||||
|
#+end_src |
||||
|
|
||||
|
- *the service name* itself if is a domain name: |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
www.mydomain.org: |
||||
|
# ... |
||||
|
#+end_src |
||||
|
|
||||
|
Please note that this is not recommended, and will be deprecated. |
||||
|
|
||||
|
*** Domain and alternate domains |
||||
|
|
||||
|
Every source (except the one coming out from the domain name), can use |
||||
|
several ways to provide *more than one domain name*. |
||||
|
|
||||
|
Please remember: |
||||
|
- At least one domain name needs to be provided |
||||
|
- and the first domain can't use wildcards and will be considered the main domain name. |
||||
|
|
||||
|
If other domains are specified, they will be used as aliases, and |
||||
|
wildcard (using ~*~) is supported. |
||||
|
|
||||
|
Additionally, bash braces expansion and regex matching are |
||||
|
available. Space separated YAML string or YAML sequences are |
||||
|
supported, also as mix of both. |
||||
|
|
||||
|
As examples, notice the following are equivalent and will serve |
||||
|
=myservice= on the exact same set of domain names: |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
myservice: |
||||
|
relations: |
||||
|
web-proxy: |
||||
|
domain: |
||||
|
## A yaml list |
||||
|
- myservice.home.org |
||||
|
- mydomain.org |
||||
|
- www.mydomain.org |
||||
|
- pro.mydomain.org |
||||
|
- *.myservice.hop.org |
||||
|
#+end_src |
||||
|
|
||||
|
|
||||
|
#+begin_src yaml |
||||
|
myservice: |
||||
|
# ... no domain set in relation |
||||
|
apache: |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
## A yaml list as a mapping value |
||||
|
myservice: |
||||
|
- myservice.home.org |
||||
|
- {,www.,pro.}mydomain.org ## bash braces expansion used |
||||
|
- *.myservice.hop.org |
||||
|
#+end_src |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
myservice: |
||||
|
# ... |
||||
|
apache: |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
## space separated YAML string and bash braces expansion |
||||
|
myservice: myservice.home.org {,www.,pro.}mydomain.org *.myservice.hop.org |
||||
|
#+end_src |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
myservice: |
||||
|
# ... |
||||
|
apache: |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
## Leveraging bash braces expansion and regex replacement |
||||
|
.*: {$0.home,{,www.,pro.}mydomain,*.$0.hop}.org |
||||
|
#+end_src |
||||
|
|
||||
|
** Domain mapping |
||||
|
|
||||
|
You can automatically assign a domain to services in relation |
||||
|
=web-proxy= or =publish-dir= with services managed by this charm using |
||||
|
the =service-domain-name= option. For instance: |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
apache: |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
.*: $0.mydomain.org |
||||
|
#+end_src |
||||
|
|
||||
|
Where ~mydomain.org~ stands for the domain where most of your services |
||||
|
will be served. You can override this behavior for some services: |
||||
|
- by adding a matching rule *before* the given rule. |
||||
|
- by specifying a =domain= in the relation's options. |
||||
|
|
||||
|
first rule matching will end the mapping: |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
apache: |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
foo: www.mydomain.org |
||||
|
bar: beta.myotherdomain.com |
||||
|
#+end_src |
||||
|
|
||||
|
Allows to distribute services to domains quite freely. |
||||
|
|
||||
|
|
||||
|
* SSH Tunnel |
||||
|
|
||||
|
On the server side, you can configure your compose file:: |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
apache: |
||||
|
options: |
||||
|
ssh-tunnel: |
||||
|
domain: ssh.domain.com ## required |
||||
|
#ssl: ... ## required, but automatically setup if you |
||||
|
## provide a ``cert-provider`` to ``apache``. |
||||
|
#+end_src |
||||
|
|
||||
|
|
||||
|
On the client side you should add this to your ``~/.ssh/config``:: |
||||
|
|
||||
|
#+begin_src conf-space |
||||
|
Host ssh.domain.com |
||||
|
Port 443 |
||||
|
ProxyCommand proxytunnel -q -E -p ssh.domain.com:443 -d ssh.domain.com:22 |
||||
|
DynamicForward 1080 |
||||
|
ServerAliveInterval 60 |
||||
|
#+end_src |
||||
|
|
||||
|
If it doesn't work, you can do some checks thanks to this command:: |
||||
|
|
||||
|
#+begin_example |
||||
|
$ proxytunnel -E -p ssh.domain.com:443 -d ssh.domain.com:22 -v \ |
||||
|
-H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)\n" |
||||
|
#+end_example |
||||
|
|
||||
|
|
@ -1,29 +0,0 @@ |
|||||
|
|
||||
|
|
||||
SSH Tunnel |
|
||||
---------- |
|
||||
|
|
||||
On the server side, you can configure your compose file:: |
|
||||
|
|
||||
apache: |
|
||||
options: |
|
||||
ssh-tunnel: |
|
||||
domain: ssh.domain.com ## required |
|
||||
#ssh: ... ## required, but automatically setup if you |
|
||||
## provide a ``cert-provider`` to ``apache``. |
|
||||
|
|
||||
|
|
||||
On the client side you should add this to your ``~/.ssh/config``:: |
|
||||
|
|
||||
Host ssh.domain.com |
|
||||
Port 443 |
|
||||
ProxyCommand proxytunnel -q -E -p ssh.domain.com:443 -d ssh.domain.com:22 |
|
||||
DynamicForward 1080 |
|
||||
ServerAliveInterval 60 |
|
||||
|
|
||||
If it doesn't work, you can do some checks thanks to this command:: |
|
||||
|
|
||||
$ proxytunnel -E -p ssh.domain.com:443 -d ssh.domain.com:22 -v \ |
|
||||
-H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)\n" |
|
||||
|
|
||||
|
|
@ -0,0 +1,316 @@ |
|||||
|
#!/bin/bash |
||||
|
|
||||
|
exname=$(basename $0) |
||||
|
|
||||
|
compose_core=$(which compose-core) || { |
||||
|
echo "Requires compose-core executable to be in \$PATH." >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
|
||||
|
fetch-def() { |
||||
|
local path="$1" fname="$2" |
||||
|
( . "$path" 1>&2 || { |
||||
|
echo "Failed to load '$path'." >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
declare -f "$fname" |
||||
|
) |
||||
|
} |
||||
|
|
||||
|
prefix_cmd=" |
||||
|
. /etc/shlib |
||||
|
|
||||
|
include common |
||||
|
include parse |
||||
|
|
||||
|
. ../lib/common |
||||
|
|
||||
|
$(fetch-def "$compose_core" yaml_get_values) |
||||
|
$(fetch-def "$compose_core" yaml_get_interpret) |
||||
|
|
||||
|
" || { |
||||
|
echo "Couldn't build prefix cmd" >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
|
||||
|
# mock |
||||
|
cfg-get-value() { |
||||
|
local key="$1" |
||||
|
shyaml get-value "$key" 2>/dev/null |
||||
|
} |
||||
|
export -f cfg-get-value |
||||
|
|
||||
|
yaml_get_interpret() { |
||||
|
shyaml get-value |
||||
|
} |
||||
|
export -f yaml_get_interpret |
||||
|
|
||||
|
|
||||
|
export state_tmpdir=$(mktemp -d -t tmp.XXXXXXXXXX) |
||||
|
trap "rm -rf \"$state_tmpdir\"" EXIT |
||||
|
|
||||
|
## |
||||
|
## Tests |
||||
|
## |
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
'" |
||||
|
is errlvl 1 |
||||
|
is err reg 'Error: .*domain option.*' |
||||
|
is out '' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: toto |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'toto |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: toto titi |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'toto titi |
||||
|
' |
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
- toto |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'toto |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
server-aliases: |
||||
|
'" |
||||
|
is errlvl 1 |
||||
|
is err part 'Error: ' |
||||
|
is err part 'No domain name set' |
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
server-aliases: |
||||
|
'" |
||||
|
is errlvl 1 |
||||
|
is err part 'Error: ' |
||||
|
is err part 'No domain name set' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
server-aliases: |
||||
|
- toto |
||||
|
'" |
||||
|
is errlvl 1 |
||||
|
is err part 'Error: ' |
||||
|
is err part "You can't specify server aliases if you don't have a domain" |
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: foo |
||||
|
server-aliases: |
||||
|
- bar |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'foo bar |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: foo |
||||
|
server-aliases: bar |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'foo bar |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
- foo |
||||
|
server-aliases: bar |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'foo bar |
||||
|
' |
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
- foo{1,2} bar |
||||
|
server-aliases: wiz |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'foo1 foo2 bar wiz |
||||
|
' |
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
- foo{1,2} bar |
||||
|
server-aliases: foo1 |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'foo1 foo2 bar |
||||
|
' |
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
- foo{1,2} bar |
||||
|
- \"*.zoo\" |
||||
|
server-aliases: foo1 |
||||
|
'" |
||||
|
noerror |
||||
|
is out 'foo1 foo2 bar *.zoo |
||||
|
' |
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: foo+ bar |
||||
|
'" |
||||
|
is errlvl 1 |
||||
|
is err part 'Error: ' |
||||
|
is err part 'Invalid domain value' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options.service-domain-map: |
||||
|
'" "empty service-domain-map" |
||||
|
is errlvl 1 |
||||
|
is err part 'Error: ' |
||||
|
is err part 'No domain name set' |
||||
|
is err part 'service-domain-map' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
BASE_SERVICE_NAME=foo |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
wiz: bar |
||||
|
'" "no map matching in service-domain-map" |
||||
|
is errlvl 1 |
||||
|
is err part 'Error: ' |
||||
|
is err part 'No domain name set' |
||||
|
is err part 'service-domain-map' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
export BASE_SERVICE_NAME=wiz |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
wiz: bar |
||||
|
'" "matching map in service-domain-map" |
||||
|
noerror |
||||
|
is out 'bar |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
export BASE_SERVICE_NAME=wiz |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
wiz?: bar |
||||
|
wiz: bar2 |
||||
|
'" "only first matching map in service-domain-map" |
||||
|
noerror |
||||
|
is out 'bar |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
export BASE_SERVICE_NAME=wiz |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
\"[w]i?zz?\": bar |
||||
|
'" "map are regex in service-domain-map" |
||||
|
noerror |
||||
|
is out 'bar |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
export BASE_SERVICE_NAME=wiz |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
(w)i(z): bar\$1\$2 |
||||
|
'" "regex capture in service-domain-map" |
||||
|
noerror |
||||
|
is out 'barwz |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
export BASE_SERVICE_NAME=wiz |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
.*: \$0.shrubbery |
||||
|
'" "regex capture 2 in service-domain-map" |
||||
|
noerror |
||||
|
is out 'wiz.shrubbery |
||||
|
' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
export BASE_SERVICE_NAME=wiz |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
.*: \$x |
||||
|
'" "refuse other variables in service-domain-map" |
||||
|
is errlvl 1 |
||||
|
is err part 'Error: ' |
||||
|
is err part 'Invalid mapping value' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
export BASE_SERVICE_NAME=wiz |
||||
|
get_domains ' |
||||
|
domain: |
||||
|
' ' |
||||
|
options: |
||||
|
service-domain-map: |
||||
|
.*: |
||||
|
- \$0.example.com |
||||
|
- my-\$0.domain.org |
||||
|
|
||||
|
'" "list is possible as value of service-domain-map" |
||||
|
noerror |
||||
|
is out 'wiz.example.com my-wiz.domain.org |
||||
|
' |
@ -0,0 +1,19 @@ |
|||||
|
# -*- ispell-local-dictionary: "english" -*- |
||||
|
|
||||
|
* Usage |
||||
|
|
||||
|
By adding =cron= as a service, all other services in auto pair mode, |
||||
|
requiring a =schedule-command= will use it. |
||||
|
|
||||
|
#+begin_src yaml |
||||
|
cron: |
||||
|
#+end_src |
||||
|
|
||||
|
There are no options to set. |
||||
|
|
||||
|
** =schedule-command= relation |
||||
|
|
||||
|
If most other services will have default options and set these values |
||||
|
automatically. You probably don't need to configure anything in the |
||||
|
relation's options if defaults suits you. |
||||
|
|
@ -1,13 +1,27 @@ |
|||||
FROM docker.0k.io/debian:jessie |
|
||||
|
FROM alpine:3.18 AS build |
||||
|
|
||||
RUN apt-get update && \ |
|
||||
DEBIAN_FRONTEND=noninteractive apt-get install -y cron moreutils && \ |
|
||||
apt-get clean && \ |
|
||||
rm -rf /var/lib/apt/lists/* |
|
||||
|
ENV DOCKER_VERSION=25.0.2 |
||||
|
|
||||
COPY ./src/usr/bin/lock /usr/bin/lock |
|
||||
COPY ./src/usr/bin/docker-17.06.2-ce /usr/bin/docker |
|
||||
|
RUN apk add --no-cache curl |
||||
|
|
||||
COPY ./entrypoint.sh /entrypoint.sh |
|
||||
|
#RUN curl -L https://download.docker.com/linux/static/stable/x86_64/docker-"$DOCKER_VERSION".tgz | \ |
||||
|
RUN curl -L https://docker.0k.io/downloads/docker-"$DOCKER_VERSION".tgz | \ |
||||
|
tar -xz -C /tmp/ \ |
||||
|
&& mv /tmp/docker/docker /usr/bin/docker |
||||
|
RUN curl -L https://docker.0k.io/downloads/lock-40a4b8f > /usr/bin/lock \ |
||||
|
&& chmod +x /usr/bin/lock |
||||
|
|
||||
ENTRYPOINT [ "/entrypoint.sh" ] |
|
||||
|
FROM alpine:3.18 |
||||
|
|
||||
|
## Used by `lock` |
||||
|
RUN apk add --no-cache bash |
||||
|
|
||||
|
## /usr/bin/dc is a calculator provided by busybox that conflicts with |
||||
|
## the `dc` command provided by `compose`. We have no need of busybox |
||||
|
## calculator |
||||
|
RUN rm /usr/bin/dc |
||||
|
|
||||
|
COPY --from=build /usr/bin/docker /usr/bin/docker |
||||
|
COPY --from=build /usr/bin/lock /usr/bin/lock |
||||
|
|
||||
|
ENTRYPOINT [ "crond", "-f", "-l", "0" ] |
@ -1,16 +0,0 @@ |
|||||
|
|
||||
|
|
||||
Warning, this charm will require access to ``/var/run/docker.sock``, |
|
||||
and this IS EQUIVALENT to root access to host. |
|
||||
|
|
||||
Warning, must use ``/etc/cron`` and not ``/etc/cron.d``. |
|
||||
|
|
||||
|
|
||||
docker was downloaded with: |
|
||||
|
|
||||
wget https://get.docker.com/builds/Linux/x86_64/docker-1.9.1 |
|
||||
|
|
||||
|
|
||||
It changed, check: |
|
||||
|
|
||||
https://download.docker.com/linux/static/stable/x86_64/ |
|
@ -1,32 +0,0 @@ |
|||||
#!/bin/bash |
|
||||
|
|
||||
## |
|
||||
## /var/log might be plugged into an empty volume for saving logs, so we |
|
||||
## must make sure that /var/log/exim4 exists and has correct permissions. |
|
||||
|
|
||||
mkdir -p /var/log/exim4 |
|
||||
chmod -R u+rw /var/log/exim4 |
|
||||
chown -R Debian-exim /var/log/exim4 |
|
||||
|
|
||||
|
|
||||
echo "Propagating docker shell environment variables to CRON scripts." |
|
||||
|
|
||||
rm -f /etc/cron.d/* |
|
||||
cp -a /etc/cron/* /etc/cron.d/ |
|
||||
|
|
||||
for f in /etc/crontab /etc/cron.d/*; do |
|
||||
[ -e "$f" ] || continue |
|
||||
mv "$f" /tmp/tempfile |
|
||||
{ |
|
||||
declare -xp | egrep '_PORT_[0-9]+_' | sed -r 's/^declare -x //g' |
|
||||
echo "TZ=$TZ" |
|
||||
echo |
|
||||
cat /tmp/tempfile |
|
||||
} > "$f" |
|
||||
rm /tmp/tempfile |
|
||||
done |
|
||||
|
|
||||
|
|
||||
echo "Launching cron." |
|
||||
## Give back PID 1, so that cron process receives signals. |
|
||||
exec /usr/sbin/cron -f |
|
@ -1,2 +0,0 @@ |
|||||
WARNING, lock shell script is a copy from ``kal-scripts``. Please |
|
||||
do not do any modification to it without sending it back to ``kal-scripts``. |
|
@ -1,363 +0,0 @@ |
|||||
#!/bin/bash |
|
||||
|
|
||||
## |
|
||||
## TODO |
|
||||
## - don't sleep 1 but wait in flock for 1 second |
|
||||
## - every waiting proc should write at least their PID and priority, |
|
||||
## to leave alive PID with higher priority the precedence. (and probably |
|
||||
## a check to the last probing time, and invalidate it if it is higher than 10s |
|
||||
## for example.) |
|
||||
## - could add the time they waited in the waiting list, and last probe. |
|
||||
## - should execute "$@", if user needs '-c' it can run ``bash -c ""`` |
|
||||
|
|
||||
exname="$(basename "$0")" |
|
||||
usage="$exname LOCKLABELS [-k] [FLOCK_OPTIONS] -- [CMD...]" |
|
||||
|
|
||||
verb() { [ -z "$verbose" ] || echo "$@" >&2 ; } |
|
||||
err() { echo "$@" >&2; } |
|
||||
die() { echo "$@" >&2; exit 1; } |
|
||||
|
|
||||
md5_compat() { md5sum | cut -c -32; true; } |
|
||||
|
|
||||
LOCKLABELS= |
|
||||
flock_opts=() |
|
||||
command=() |
|
||||
nonblock= |
|
||||
errcode=1 |
|
||||
timeout= |
|
||||
cmd= |
|
||||
priority=1 |
|
||||
remove_duplicate= |
|
||||
while [ "$1" ]; do |
|
||||
case "$1" in |
|
||||
-h|--help) |
|
||||
echo "$help" |
|
||||
exit 0 |
|
||||
;; |
|
||||
-V|--version) |
|
||||
echo "$version" |
|
||||
exit 0 |
|
||||
;; |
|
||||
-c) |
|
||||
cmd="$2" |
|
||||
shift |
|
||||
;; |
|
||||
-p|--priority) |
|
||||
priority=$2 |
|
||||
shift |
|
||||
;; |
|
||||
-D) |
|
||||
remove_duplicate=true |
|
||||
;; |
|
||||
-k) |
|
||||
kill=yes |
|
||||
;; |
|
||||
-n|--nb|--nonblock) |
|
||||
nonblock=true |
|
||||
;; |
|
||||
-w|--wait|--timeout) |
|
||||
timeout=$2 ## will manage this |
|
||||
shift |
|
||||
;; |
|
||||
-E|--conflict-exit-code) |
|
||||
errcode=$2 ## will manage this |
|
||||
shift |
|
||||
;; |
|
||||
-v|--verbose) |
|
||||
verbose=true ## will manage this |
|
||||
;; |
|
||||
-n|--nb|--nonblock) |
|
||||
nonblock=true ## will manage this |
|
||||
;; |
|
||||
--) |
|
||||
[ "$cmd" ] && die "'--' and '-c' are mutualy exclusive" |
|
||||
shift |
|
||||
command+=("$@") |
|
||||
break 2 |
|
||||
;; |
|
||||
*) |
|
||||
[ -z "$LOCKLABELS" ] && { LOCKLABELS=$1 ; shift ; continue ; } |
|
||||
flock_opts+=("$1") |
|
||||
;; |
|
||||
esac |
|
||||
shift |
|
||||
done |
|
||||
|
|
||||
if [ -z "$LOCKLABELS" ]; then |
|
||||
err "You must provide a lock file as first argument." |
|
||||
err "$usage" |
|
||||
exit 1 |
|
||||
fi |
|
||||
|
|
||||
if [ "$remove_duplicate" ]; then |
|
||||
md5code=$( |
|
||||
if [ "$cmd" ]; then |
|
||||
echo bash -c "$cmd" |
|
||||
else |
|
||||
echo "${command[@]}" |
|
||||
fi | md5_compat) |
|
||||
fi |
|
||||
|
|
||||
|
|
||||
function is_int () { [[ "$1" =~ ^-?[0-9]+$ ]] ; } |
|
||||
|
|
||||
is_pid_alive() { |
|
||||
local pid="$1" |
|
||||
ps --pid "$pid" >/dev/null 2>&1 |
|
||||
} |
|
||||
|
|
||||
|
|
||||
is_pgid_alive() { |
|
||||
local pgid="$1" |
|
||||
[ "$(ps -e -o pgid,pid= | egrep "^ *$pgid ")" ] |
|
||||
} |
|
||||
|
|
||||
|
|
||||
pgid_from_pid() { |
|
||||
local pid="$1" |
|
||||
pgid=$(ps -o pgid= "$pid" 2>/dev/null | egrep -o "[0-9]+") |
|
||||
if ! is_int "$pgid"; then |
|
||||
err "Could not retrieve a valid PGID from PID '$pid' (returned '$pgid')." |
|
||||
return 1 |
|
||||
fi |
|
||||
echo "$pgid" |
|
||||
} |
|
||||
|
|
||||
|
|
||||
ensure_kill() { |
|
||||
local pid="$1" timeout=5 start=$SECONDS kill_count=0 pgid |
|
||||
pgid=$(pgid_from_pid "$pid") |
|
||||
while is_pid_alive "$pid"; do |
|
||||
if is_pgid_alive "$pgid"; then |
|
||||
if [ "$kill_count" -gt 4 ]; then |
|
||||
err "FATAL: duplicate command, GPID=$pgid has resisted kill procedure. Aborting." |
|
||||
return 1 |
|
||||
elif [ "$kill_count" -gt 2 ]; then |
|
||||
err "duplicate command, PGID wouldn't close itself, force kill PGID: kill -9 -- -$pgid" |
|
||||
kill -9 -- "$pgid" |
|
||||
sleep 1 |
|
||||
else |
|
||||
err "duplicate command, Sending SIGKILL to PGID: kill -- -$pgid" |
|
||||
kill -- -"$pgid" |
|
||||
sleep 1 |
|
||||
fi |
|
||||
((kill_count++)) |
|
||||
fi |
|
||||
if [ "$((SECONDS - start))" -gt "$timeout" ]; then |
|
||||
err "timeout reached. $pid" |
|
||||
return 1 |
|
||||
fi |
|
||||
done |
|
||||
return 0 |
|
||||
} |
|
||||
|
|
||||
|
|
||||
acquire_pid_file() { |
|
||||
local label=$1 |
|
||||
lockfile="/var/lock/lockcmd-$label.lock" |
|
||||
mkdir -p /var/run/lockcmd |
|
||||
pidfile="/var/run/lockcmd/$label.pid" |
|
||||
export pidfile |
|
||||
( |
|
||||
verb() { [ -z "$verbose" ] || echo "$exname($label) $pid> $@" >&2 ; } |
|
||||
err() { echo "$exname($label) $pid> $@" >&2; } |
|
||||
|
|
||||
start=$SECONDS |
|
||||
kill_count=0 |
|
||||
pgid_not_alive_count=0 |
|
||||
while true; do |
|
||||
## ask for lock on $lockfile (fd 200) |
|
||||
if ! flock -n -x 200; then |
|
||||
verb "Couldn't acquire primary lock... (elapsed $((SECONDS - start)))" |
|
||||
else |
|
||||
verb "Acquired lock '$label' on pidfile, inspecting pidfile." |
|
||||
if ! [ -e "$pidfile" ]; then |
|
||||
verb "No pidfile, inscribing my PID" |
|
||||
echo -e "$pid $priority" > "$pidfile" |
|
||||
exit 0 |
|
||||
fi |
|
||||
|
|
||||
if ! content=$(cat "$pidfile" 2>/dev/null); then |
|
||||
err "Can't read $pidfile" |
|
||||
exit 1 |
|
||||
fi |
|
||||
read opid opriority < <(echo "$content" | head -n 1) |
|
||||
opriority=${opriority:-1} |
|
||||
verb "Previous PID is $opid, with priority $opriority" |
|
||||
if ! is_pid_alive "$opid"; then |
|
||||
err "Ignoring stale PID $opid" |
|
||||
echo -e "$pid $priority" > "$pidfile" |
|
||||
exit 0 |
|
||||
else |
|
||||
if [ "$remove_duplicate" ]; then ## Add my pid and md5 if not already there. |
|
||||
same_cmd_pids=$( |
|
||||
echo "$content" | tail -n +1 | \ |
|
||||
egrep "^[0-9]+ $md5code$" 2>/dev/null | \ |
|
||||
cut -f 1 -d " ") |
|
||||
same_pids=() |
|
||||
found_myself= |
|
||||
for spid in $same_cmd_pids; do |
|
||||
if [ "$spid" == "$pid" ]; then |
|
||||
found_myself=true |
|
||||
continue |
|
||||
fi |
|
||||
same_pids+=("$spid") |
|
||||
done |
|
||||
[ "$found_myself" ] || echo "$pid $md5code" >> "$pidfile" |
|
||||
fi |
|
||||
flock -u 200 ## reopen the lock to give a chance to the other process to remove the pidfile. |
|
||||
if [ "$remove_duplicate" ]; then ## Add my pid and md5 if not already there. |
|
||||
for spid in "${same_pids[@]}"; do |
|
||||
if ! ensure_kill "$spid"; then |
|
||||
err "Couldn't kill previous duplicate command." |
|
||||
exit 1 |
|
||||
fi |
|
||||
done |
|
||||
fi |
|
||||
pgid=$(pgid_from_pid "$opid") |
|
||||
verb "PGID of previous PID is $pgid" |
|
||||
if is_pgid_alive "$pgid"; then |
|
||||
verb "Previous PGID is still alive" |
|
||||
if [ "$kill" ] && [ "$priority" -ge "$opriority" ]; then |
|
||||
if [ "$kill_count" -gt 4 ]; then |
|
||||
err "$pid>FATAL: GPID=$pgid has resisted kill procedure. Aborting." |
|
||||
exit 1 |
|
||||
elif [ "$kill_count" -gt 2 ]; then |
|
||||
err "PGID wouldn't close itself, force kill PGID: kill -9 -- -$pgid" >&2 |
|
||||
kill -9 -- "$pgid" |
|
||||
sleep 1 |
|
||||
else |
|
||||
err "Sending SIGKILL to PGID: kill -- -$pgid" >&2 |
|
||||
kill -- -"$pgid" |
|
||||
sleep 1 |
|
||||
fi |
|
||||
((kill_count++)) |
|
||||
else |
|
||||
if [ "$nonblock" ]; then |
|
||||
verb "Nonblock options forces exit." |
|
||||
exit 1 |
|
||||
else |
|
||||
verb "Couldn't acquire Lock... (elapsed $((SECONDS - start)))" |
|
||||
fi |
|
||||
fi |
|
||||
else |
|
||||
if [ "$pgid_not_alive_count" -gt 4 ]; then |
|
||||
verb "$pid>A lock exists for label $label, but PGID:$pgid in it isn't alive while child $pid is ?!?." |
|
||||
err "$pid>Can't force seizing the lock." >&2 |
|
||||
exit 1 |
|
||||
fi |
|
||||
((pgid_not_alive_count++)) |
|
||||
fi |
|
||||
fi |
|
||||
fi |
|
||||
|
|
||||
if [ "$timeout" ] && [ "$timeout" -lt "$((SECONDS - start))" ]; then |
|
||||
err "Timeout reached (${timeout}s) while waiting for lock on $label" |
|
||||
exit "$errcode" |
|
||||
fi |
|
||||
sleep 1 |
|
||||
done |
|
||||
) 200> "$lockfile" |
|
||||
} |
|
||||
|
|
||||
remove_pid_file() { |
|
||||
local label=$1 |
|
||||
lockfile="/var/lock/lockcmd-$label.lock" |
|
||||
mkdir -p /var/run/lockcmd |
|
||||
pidfile="/var/run/lockcmd/$label.pid" |
|
||||
|
|
||||
( |
|
||||
verb() { [ -z "$verbose" ] || echo "$exname($label) $pid> $@" >&2 ; } |
|
||||
err() { echo "$exname($label) $pid> $@" >&2; } |
|
||||
verb "Asking lock to delete $pidfile." |
|
||||
timeout=5 |
|
||||
start=$SECONDS |
|
||||
while true; do |
|
||||
## ask for lock on $lockfile (fd 200) |
|
||||
if ! flock -n -x 200; then |
|
||||
verb "Couldn't acquire primary lock... (elapsed $((SECONDS - start)))" |
|
||||
else |
|
||||
verb "Acquired lock '$label' on pidfile." |
|
||||
if ! [ -e "$pidfile" ]; then |
|
||||
verb "No more pidfile, somebody deleted for us ?1?" |
|
||||
exit 1 |
|
||||
fi |
|
||||
if ! content=$(cat "$pidfile" 2>/dev/null); then |
|
||||
err "Can't read $pidfile" |
|
||||
exit 1 |
|
||||
fi |
|
||||
read opid opriority < <(echo "$content" | head -n 1) |
|
||||
opriority=${opriority:-1} |
|
||||
if [ "$opid" == "$pid" ]; then |
|
||||
verb "Deleted pidfile. Releasing lock." |
|
||||
rm -f "$pidfile" |
|
||||
exit 0 |
|
||||
else |
|
||||
verb "Removing duplicates in pidfile. Releasing lock." |
|
||||
[ "$remove_duplicate" ] && sed -ri "/^$pid $md5code$/d" "$pidfile" |
|
||||
exit 0 |
|
||||
fi |
|
||||
fi |
|
||||
if [ "$timeout" ] && [ "$timeout" -lt "$((SECONDS - start))" ]; then |
|
||||
err "Timeout reached (${timeout}s) while waiting for lock on $label" |
|
||||
exit "$errcode" |
|
||||
fi |
|
||||
sleep 1 |
|
||||
done |
|
||||
) 200> "$lockfile" |
|
||||
|
|
||||
} |
|
||||
|
|
||||
|
|
||||
## appends a command to the signal handler functions |
|
||||
# |
|
||||
# example: trap_add EXIT,INT close_ssh "$ip" |
|
||||
trap_add() { |
|
||||
local sigs="$1" sig cmd old_cmd |
|
||||
shift || { |
|
||||
echo "${FUNCNAME} usage error" >&2 |
|
||||
return 1 |
|
||||
} |
|
||||
cmd="$@" |
|
||||
while IFS="," read -d "," sig; do |
|
||||
prev_cmd="$(trap -p "$sig")" |
|
||||
if [ "$prev_cmd" ]; then |
|
||||
new_cmd="${prev_cmd#trap -- \'}" |
|
||||
new_cmd="${new_cmd%\' "$sig"};$cmd" |
|
||||
else |
|
||||
new_cmd="$cmd" |
|
||||
fi |
|
||||
trap -- "$new_cmd" "$sig" || { |
|
||||
echo "unable to add command '$@' to trap $sig" >&2 ; |
|
||||
return 1 |
|
||||
} |
|
||||
done < <(echo "$sigs,") |
|
||||
} |
|
||||
|
|
||||
remove_all_pid_file() { |
|
||||
while read -d "," label; do |
|
||||
{ |
|
||||
remove_pid_file "$label" || err "Could not delete $label" |
|
||||
} & |
|
||||
done < <(echo "$LOCKLABELS,") |
|
||||
wait |
|
||||
} |
|
||||
|
|
||||
## |
|
||||
## Code |
|
||||
## |
|
||||
|
|
||||
pid="$$" |
|
||||
|
|
||||
trap_add EXIT "remove_all_pid_file" |
|
||||
while read -d "," label; do |
|
||||
acquire_pid_file "$label" || exit "$errcode" & |
|
||||
done < <(echo "$LOCKLABELS,") |
|
||||
wait |
|
||||
if [ "$cmd" ]; then |
|
||||
bash -c "$cmd" |
|
||||
else |
|
||||
"${command[@]}" |
|
||||
fi |
|
||||
errlvl="$?" |
|
||||
exit "$?" |
|
@ -1,20 +1,39 @@ |
|||||
#!/bin/bash |
#!/bin/bash |
||||
## Should be executable N time in a row with same result. |
|
||||
|
|
||||
|
. lib/common |
||||
|
|
||||
set -e |
set -e |
||||
|
|
||||
cron_config_hash() { |
|
||||
debug "Adding config hash to enable recreating upon config change." |
|
||||
config_hash=$({ |
|
||||
find "$SERVICE_CONFIGSTORE/etc/cron"{,.hourly,.weekly,.daily,.monthly} \ |
|
||||
-type f -exec md5sum {} \; |
|
||||
} | md5_compat) || exit 1 |
|
||||
init-config-add " |
|
||||
$MASTER_BASE_SERVICE_NAME: |
|
||||
labels: |
|
||||
- compose.config_hash=$config_hash |
|
||||
" |
|
||||
|
root_crontab="$SERVICE_CONFIGSTORE/etc/crontabs/root" |
||||
|
|
||||
|
cron_content=$(set pipefail; cron:entries | tr '\0' '\n') || { |
||||
|
err "Failed to make cron entries" >&2 |
||||
|
exit 1 |
||||
} |
} |
||||
|
if [ -z "${cron_content::1}" ]; then |
||||
|
err "Unexpected empty scheduled command list." |
||||
|
exit 1 |
||||
|
fi |
||||
|
if [ -e "$root_crontab" ]; then |
||||
|
if ! [ -f "$root_crontab" ]; then |
||||
|
err "Destination '$root_crontab' exists and is not a file." |
||||
|
exit 1 |
||||
|
fi |
||||
|
current_content=$(cat "$root_crontab") |
||||
|
if [ "$current_content" = "$cron_content" ]; then |
||||
|
info "Cron entry already up to date." |
||||
|
exit 0 |
||||
|
fi |
||||
|
fi |
||||
|
|
||||
|
|
||||
|
if ! [ -d "${root_crontab%/*}" ]; then |
||||
|
mkdir -p "${root_crontab%/*}" |
||||
|
fi |
||||
|
|
||||
|
printf "%s\n" "$cron_content" > "$root_crontab" |
||||
|
## Busybox cron uses cron.update file to rescan new cron entries |
||||
|
## cf: https://git.busybox.net/busybox/tree/miscutils/crond.c#n1089 |
||||
|
touch "${root_crontab%/*}/cron.update" |
||||
|
|
||||
cron_config_hash || exit 1 |
|
||||
|
info "Cron entry updated ${GREEN}successfully${NORMAL}." |
@ -0,0 +1,210 @@ |
|||||
|
# -*- mode: shell-script -*- |
||||
|
|
||||
|
cron:get_config() { |
||||
|
local cfg="$1" |
||||
|
local cache_file="$CACHEDIR/$FUNCNAME.cache.$(H "$@")" \ |
||||
|
type value |
||||
|
if [ -e "$cache_file" ]; then |
||||
|
#debug "$FUNCNAME: SESSION cache hit $1" |
||||
|
cat "$cache_file" |
||||
|
return 0 |
||||
|
fi |
||||
|
type=$(e "$cfg" | shyaml -q get-type 2>/dev/null) || true |
||||
|
case "$type" in |
||||
|
"sequence") |
||||
|
while read-0-err E s; do |
||||
|
cron:get_config "$s" || return 1 |
||||
|
done < <(e "$cfg" | p-err shyaml -q get-values-0 -y) |
||||
|
if [ "$E" != 0 ]; then |
||||
|
err "Failed to parse sequence while reading config." |
||||
|
return 1 |
||||
|
fi |
||||
|
;; |
||||
|
"struct") |
||||
|
while read-0-err E k v; do |
||||
|
while read-0-err E1 schedule lock_opts title command; do |
||||
|
if [ -n "$title" ]; then |
||||
|
err "Unexpected label specified in struct." |
||||
|
echo " Using struct, the key will be used as label." >&2 |
||||
|
echo " So you can't specify a label inner value(s)." >&2 |
||||
|
return 1 |
||||
|
fi |
||||
|
p0 "$schedule" "$lock_opts" "$k" "$command" |
||||
|
done < <(p-err cron:get_config "$v") |
||||
|
if [ "$E1" != 0 ]; then |
||||
|
err "Failed to parse value of key '$k' in struct config." |
||||
|
return 1 |
||||
|
fi |
||||
|
done < <(e "$cfg" | p-err shyaml -q key-values-0 -y) |
||||
|
if [ "$E" != 0 ]; then |
||||
|
err "Failed to parse key values while reading config." |
||||
|
return 1 |
||||
|
fi |
||||
|
;; |
||||
|
"str") |
||||
|
## examples: |
||||
|
## (*/5 * * * *) {-k} bash -c "foo bla bla" |
||||
|
## (@daily) {-p 10 -D} bash -c "foo bla bla" |
||||
|
value=$(e "$cfg" | yaml_get_values) || { |
||||
|
err "Failed to parse str while reading config." |
||||
|
return 1 |
||||
|
} |
||||
|
if ! [[ "$value" =~ ^[[:space:]]*([a-zA-Z0-9_-]+)?[[:space:]]*"("([^\)]+)")"[[:space:]]+\{([^\}]*)\}[[:space:]]*(.*)$ ]]; then |
||||
|
err "Invalid syntax, expected: 'LABEL (SCHEDULE) {LOCK_OPTIONS} COMMAND'." |
||||
|
echo " With LABEL being a possible empty string." >&2 |
||||
|
echo " Received: '$value'" >&2 |
||||
|
return 1 |
||||
|
fi |
||||
|
printf "%s\0" "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}" "${BASH_REMATCH[1]}" "${BASH_REMATCH[4]}" |
||||
|
;; |
||||
|
NoneType|"") |
||||
|
: |
||||
|
;; |
||||
|
*) |
||||
|
value=$(e "$cfg" | yaml_get_interpret) || { |
||||
|
err "Failed to parse value while reading config." |
||||
|
return 1 |
||||
|
} |
||||
|
if [[ "$value" == "$cfg" ]]; then |
||||
|
err "Unrecognized type '$type'." |
||||
|
return 1 |
||||
|
fi |
||||
|
cron:get_config "$value" || return 1 |
||||
|
;; |
||||
|
esac > "$cache_file.tmp" |
||||
|
|
||||
|
mv -v "$cache_file.tmp" "$cache_file" >&2 |
||||
|
|
||||
|
## if cache file is empty, this is an error |
||||
|
if [ ! -s "$cache_file" ]; then |
||||
|
err "Unexpected empty relation options." |
||||
|
echo " - check that you don't overwrite default options with an empty relation" >&2 |
||||
|
echo " - check your charm is setting default options" >&2 |
||||
|
echo " Original value: '$cfg'" >&2 |
||||
|
rm -f "$cache_file" |
||||
|
return 1 |
||||
|
fi |
||||
|
|
||||
|
cat "$cache_file" |
||||
|
} |
||||
|
|
||||
|
|
||||
|
cron:entries_from_service() { |
||||
|
local service="$1" relation_cfg="$2" \ |
||||
|
cache_file="$CACHEDIR/$FUNCNAME.cache.$(H "$@")" \ |
||||
|
label schedule lock_opts title command full_label |
||||
|
if [ -e "$cache_file" ]; then |
||||
|
#debug "$FUNCNAME: SESSION cache hit $1" |
||||
|
cat "$cache_file" |
||||
|
return 0 |
||||
|
fi |
||||
|
|
||||
|
## XXXvlab; should factorize this with compose-core to setup relation env |
||||
|
export BASE_SERVICE_NAME=$service |
||||
|
MASTER_BASE_SERVICE_NAME=$(get_top_master_service_for_service "$service") || return 1 |
||||
|
MASTER_BASE_CHARM_NAME=$(get_service_charm "$MASTER_BASE_SERVICE_NAME") || return 1 |
||||
|
BASE_CHARM_NAME=$(get_service_charm "$service") || return 1 |
||||
|
BASE_CHARM_PATH=$(charm.get_dir "$BASE_CHARM_NAME") || return 1 |
||||
|
export MASTER_BASE_{CHARM,SERVICE}_NAME BASE_CHARM_{PATH,NAME} |
||||
|
|
||||
|
label="launch-$service" |
||||
|
while read-0-err E schedule lock_opts title command; do |
||||
|
lock_opts=($lock_opts) |
||||
|
if ! [[ "$schedule" =~ ^(([0-9/,*-]+[[:space:]]+){4,4}[0-9/,*-]+|@[a-z]+)$ ]]; then |
||||
|
err "Unrecognized schedule '$schedule'." |
||||
|
return 1 |
||||
|
fi |
||||
|
## Check that label is only a simple identifier |
||||
|
if ! [[ "$title" =~ ^[a-zA-Z0-9_-]*$ ]]; then |
||||
|
err "Unexpected title '$title', please use only alphanumeric, underscore or dashes (can be empty)." |
||||
|
return 1 |
||||
|
fi |
||||
|
if ! lock_opts=($(cron:lock_opts "${lock_opts[@]}")); then |
||||
|
err "Failed to parse lock options." |
||||
|
return 1 |
||||
|
fi |
||||
|
if [ -z "$command" ]; then |
||||
|
err "Unexpected empty command." |
||||
|
return 1 |
||||
|
fi |
||||
|
|
||||
|
full_label="$label" |
||||
|
[ -n "$title" ] && full_label+="-$title" |
||||
|
|
||||
|
## escape double-quotes |
||||
|
command=${command//\"/\\\"} |
||||
|
|
||||
|
p0 "$schedule lock ${full_label} ${lock_opts[*]} -c \"$command\" 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S %Z\"), \$0; fflush(); }' >> /var/log/cron/${full_label}_script.log" |
||||
|
|
||||
|
done < <(p-err cron:get_config "$relation_cfg") > "$cache_file" |
||||
|
if [ "$E" != 0 ]; then |
||||
|
rm -f "$cache_file" |
||||
|
err "Failed to get ${DARKYELLOW}$service${NORMAL}--${DARKBLUE}schedule-command${NORMAL}-->${DARKYELLOW}$SERVICE_NAME${NORMAL}'s config." |
||||
|
return 1 |
||||
|
fi |
||||
|
|
||||
|
cat "$cache_file" |
||||
|
} |
||||
|
|
||||
|
|
||||
|
cron:lock_opts() { |
||||
|
local cache_file="$CACHEDIR/$FUNCNAME.cache.$(H "$@")" \ |
||||
|
label schedule lock_opts title command full_label |
||||
|
if [ -e "$cache_file" ]; then |
||||
|
#debug "$FUNCNAME: SESSION cache hit $1" |
||||
|
cat "$cache_file" |
||||
|
return 0 |
||||
|
fi |
||||
|
lock_opts=() |
||||
|
while [ "$1" ]; do |
||||
|
case "$1" in |
||||
|
"-D"|"-k") |
||||
|
lock_opts+=("$1") |
||||
|
;; |
||||
|
"-p") |
||||
|
## check that the value is a number |
||||
|
if ! [[ "$2" =~ ^[0-9]+$ ]]; then |
||||
|
err "Unexpected value for priority '$2' (expected an integer)." |
||||
|
return 1 |
||||
|
fi |
||||
|
lock_opts+=(-p "$2") |
||||
|
shift |
||||
|
;; |
||||
|
"-*"|"--*") |
||||
|
err "Unrecognized lock option '$1'." |
||||
|
return 1 |
||||
|
;; |
||||
|
*) |
||||
|
err "Unexpected lock argument '$1'." |
||||
|
return 1 |
||||
|
;; |
||||
|
esac |
||||
|
shift |
||||
|
done |
||||
|
printf "%s\n" "${lock_opts[@]}" > "$cache_file" |
||||
|
|
||||
|
cat "$cache_file" |
||||
|
} |
||||
|
|
||||
|
|
||||
|
cron:entries() { |
||||
|
local cache_file="$CACHEDIR/$FUNCNAME.cache.$(H "$SERVICE_NAME" "$ALL_RELATIONS")" \ |
||||
|
s rn ts rc td |
||||
|
if [ -e "$cache_file" ]; then |
||||
|
#debug "$FUNCNAME: SESSION cache hit $1" |
||||
|
cat "$cache_file" |
||||
|
return 0 |
||||
|
fi |
||||
|
|
||||
|
if [ -z "$ALL_RELATIONS" ]; then |
||||
|
err "Expected \$ALL_RELATIONS to be set." |
||||
|
exit 1 |
||||
|
fi |
||||
|
export TARGET_SERVICE_NAME=$SERVICE_NAME |
||||
|
while read-0 service relation_cfg; do |
||||
|
debug "service: '$service' relation_cfg: '$relation_cfg'" |
||||
|
cron:entries_from_service "$service" "$relation_cfg" || return 1 |
||||
|
done < <(get_service_incoming_relations "$SERVICE_NAME" "schedule-command") > "$cache_file" |
||||
|
cat "$cache_file" |
||||
|
} |
||||
|
export -f cron:entries |
@ -0,0 +1,155 @@ |
|||||
|
#!/bin/bash |
||||
|
|
||||
|
exname=$(basename $0) |
||||
|
|
||||
|
compose_core=$(which compose-core) || { |
||||
|
echo "Requires compose-core executable to be in \$PATH." >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
|
||||
|
fetch-def() { |
||||
|
local path="$1" fname="$2" |
||||
|
( . "$path" 1>&2 || { |
||||
|
echo "Failed to load '$path'." >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
declare -f "$fname" |
||||
|
) |
||||
|
} |
||||
|
|
||||
|
prefix_cmd=" |
||||
|
. /etc/shlib |
||||
|
|
||||
|
include common |
||||
|
include parse |
||||
|
|
||||
|
. ../lib/common |
||||
|
|
||||
|
$(fetch-def "$compose_core" yaml_get_values) |
||||
|
$(fetch-def "$compose_core" yaml_get_interpret) |
||||
|
$(fetch-def "$compose_core" read-0-err) |
||||
|
$(fetch-def "$compose_core" p-err) |
||||
|
$(fetch-def "$compose_core" expand_vars) |
||||
|
|
||||
|
SERVICE_NAME='bar' |
||||
|
|
||||
|
" || { |
||||
|
echo "Couldn't build prefix cmd" >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
|
||||
|
# mock |
||||
|
cfg-get-value() { |
||||
|
local key="$1" |
||||
|
shyaml get-value "$key" 2>/dev/null |
||||
|
} |
||||
|
export -f cfg-get-value |
||||
|
|
||||
|
yaml_get_interpret() { |
||||
|
shyaml get-value |
||||
|
} |
||||
|
export -f yaml_get_interpret |
||||
|
|
||||
|
get_top_master_service_for_service() { |
||||
|
local service="$1" |
||||
|
echo "$service" |
||||
|
} |
||||
|
export -f get_top_master_service_for_service |
||||
|
|
||||
|
get_service_charm() { |
||||
|
local service="$1" |
||||
|
echo "$service" |
||||
|
} |
||||
|
export -f get_service_charm |
||||
|
|
||||
|
export CACHEDIR=$(mktemp -d -t tmp.XXXXXXXXXX) |
||||
|
export state_tmpdir=$(mktemp -d -t tmp.XXXXXXXXXX) |
||||
|
trap "rm -rf \"$state_tmpdir\"" EXIT |
||||
|
trap "rm -rf \"$CACHEDIR\"" EXIT |
||||
|
|
||||
|
## |
||||
|
## Tests |
||||
|
## |
||||
|
|
||||
|
try " |
||||
|
cron:entries_from_service 'foo' ''" |
||||
|
is errlvl 1 |
||||
|
is err reg "Error:.*ailed to get.*." |
||||
|
is err reg "Error:.*empty.*." |
||||
|
is out '' TRIM |
||||
|
|
||||
|
try " |
||||
|
cron:entries_from_service 'foo' ' |
||||
|
(0 0 * * *) {XX} dc run --rm foo |
||||
|
'" "wrong lock args" |
||||
|
is errlvl 1 |
||||
|
is err reg "Error:.*lock argument.*." |
||||
|
is err reg "Error:.*parse lock.*." |
||||
|
is out '' TRIM |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
cron:entries_from_service 'foo' ' |
||||
|
(0 0 * * * *) {} dc run --rm foo |
||||
|
'" "wrong schedule" |
||||
|
is errlvl 1 |
||||
|
is err reg "Error:.*schedule.*" |
||||
|
is out '' TRIM |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
cron:entries_from_service 'foo' ' |
||||
|
(0 0 * * *) {} |
||||
|
'" "wrong command" |
||||
|
is errlvl 1 |
||||
|
is err reg "Error:.*empty command.*" |
||||
|
is out '' TRIM |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:entries_from_service 'foo' ' |
||||
|
(0 0 * * *) {-p 10 -k} dc run --rm foo |
||||
|
' | tr '\0' '\n'" "one command no label" |
||||
|
noerror |
||||
|
is out "\ |
||||
|
0 0 * * * lock launch-foo -p 10 -k -c \"dc run --rm foo\" 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S %Z\"), \$0; fflush(); }' >> /var/log/cron/launch-foo_script.log\ |
||||
|
" TRIM |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:entries_from_service 'foo' ' |
||||
|
wiz: (0 0 * * *) {-p 10 -k} dc run --rm foo |
||||
|
' | tr '\0' '\n'" "one command with label" |
||||
|
noerror |
||||
|
is out "\ |
||||
|
0 0 * * * lock launch-foo-wiz -p 10 -k -c \"dc run --rm foo\" 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S %Z\"), \$0; fflush(); }' >> /var/log/cron/launch-foo-wiz_script.log\ |
||||
|
" TRIM |
||||
|
|
||||
|
|
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:entries_from_service 'foo' ' |
||||
|
wiz: (0 0 * * *) {-p 10 -k} dc run --rm foo |
||||
|
bam: (@daily) {-p 10 -D -k} dc run --rm foo --hop |
||||
|
|
||||
|
' | tr '\0' '\n'" "multi command with label" |
||||
|
noerror |
||||
|
is out "\ |
||||
|
0 0 * * * lock launch-foo-wiz -p 10 -k -c \"dc run --rm foo\" 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S %Z\"), \$0; fflush(); }' >> /var/log/cron/launch-foo-wiz_script.log |
||||
|
@daily lock launch-foo-bam -p 10 -D -k -c \"dc run --rm foo --hop\" 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S %Z\"), \$0; fflush(); }' >> /var/log/cron/launch-foo-bam_script.log\ |
||||
|
" TRIM |
||||
|
|
||||
|
|
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:entries_from_service 'foo' '!var-expand |
||||
|
(0 0 * * *) {-p 10 -k} dc run --rm \$BASE_SERVICE_NAME \$MASTER_BASE_SERVICE_NAME |
||||
|
' | tr '\0' '\n'" "using relation's var" |
||||
|
noerror |
||||
|
is out "\ |
||||
|
0 0 * * * lock launch-foo -p 10 -k -c \"dc run --rm foo foo\" 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S %Z\"), \$0; fflush(); }' >> /var/log/cron/launch-foo_script.log" TRIM |
||||
|
|
@ -0,0 +1,151 @@ |
|||||
|
#!/bin/bash |
||||
|
|
||||
|
exname=$(basename $0) |
||||
|
|
||||
|
compose_core=$(which compose-core) || { |
||||
|
echo "Requires compose-core executable to be in \$PATH." >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
|
||||
|
fetch-def() { |
||||
|
local path="$1" fname="$2" |
||||
|
( . "$path" 1>&2 || { |
||||
|
echo "Failed to load '$path'." >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
declare -f "$fname" |
||||
|
) |
||||
|
} |
||||
|
|
||||
|
prefix_cmd=" |
||||
|
. /etc/shlib |
||||
|
|
||||
|
include common |
||||
|
include parse |
||||
|
|
||||
|
. ../lib/common |
||||
|
|
||||
|
$(fetch-def "$compose_core" yaml_get_values) |
||||
|
$(fetch-def "$compose_core" yaml_get_interpret) |
||||
|
$(fetch-def "$compose_core" read-0-err) |
||||
|
$(fetch-def "$compose_core" p-err) |
||||
|
$(fetch-def "$compose_core" expand_vars) |
||||
|
|
||||
|
" || { |
||||
|
echo "Couldn't build prefix cmd" >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
|
||||
|
# mock |
||||
|
cfg-get-value() { |
||||
|
local key="$1" |
||||
|
shyaml get-value "$key" 2>/dev/null |
||||
|
} |
||||
|
export -f cfg-get-value |
||||
|
|
||||
|
yaml_get_interpret() { |
||||
|
shyaml get-value |
||||
|
} |
||||
|
export -f yaml_get_interpret |
||||
|
|
||||
|
|
||||
|
export CACHEDIR=$(mktemp -d -t tmp.XXXXXXXXXX) |
||||
|
export state_tmpdir=$(mktemp -d -t tmp.XXXXXXXXXX) |
||||
|
trap "rm -rf \"$state_tmpdir\"" EXIT |
||||
|
trap "rm -rf \"$CACHEDIR\"" EXIT |
||||
|
|
||||
|
## |
||||
|
## Tests |
||||
|
## |
||||
|
|
||||
|
try " |
||||
|
cron:get_config ''" |
||||
|
is errlvl 1 |
||||
|
is err reg 'Error: .*empty.*' |
||||
|
is out '' |
||||
|
|
||||
|
try " |
||||
|
cron:get_config 'xxx'" |
||||
|
is errlvl 1 |
||||
|
is err reg 'Error: .*syntax.*' |
||||
|
is out '' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:get_config '(@daily) {} /bin/true' | tr '\0' ':' |
||||
|
" "str simple example without label" |
||||
|
noerror |
||||
|
is out "@daily:::/bin/true:" |
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:get_config 'foo (@daily) {} /bin/true' | tr '\0' ':' |
||||
|
" "str simple example with label" |
||||
|
noerror |
||||
|
is out "@daily::foo:/bin/true:" |
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:get_config 'foo (@daily) {-p 10 -D} /bin/true' | tr '\0' ':' |
||||
|
" "str simple example with lock options" |
||||
|
noerror |
||||
|
is out "@daily:-p 10 -D:foo:/bin/true:" |
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:get_config 'foo (*/2 * * * *) {-p 10 -D} /bin/true' | tr '\0' ':' |
||||
|
" "str simple example with all fields" |
||||
|
noerror |
||||
|
is out "*/2 * * * *:-p 10 -D:foo:/bin/true:" |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:get_config '- foo (*/2 * * * *) {-p 10 -D} /bin/true' | tr '\0' ':' |
||||
|
" "list 1 elt with str simple example with all fields" |
||||
|
noerror |
||||
|
is out "*/2 * * * *:-p 10 -D:foo:/bin/true:" |
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:get_config ' |
||||
|
- foo (*/2 * * * *) {-p 10 -D} /bin/true |
||||
|
- bar (*/3 * * * *) {-p 10 -D -k} /bin/false |
||||
|
|
||||
|
' | tr '\0' ':' |
||||
|
" "list 2 elts with str simple example with all fields" |
||||
|
noerror |
||||
|
is out "*/2 * * * *:-p 10 -D:foo:/bin/true:*/3 * * * *:-p 10 -D -k:bar:/bin/false:" |
||||
|
|
||||
|
try " |
||||
|
set pipefail && |
||||
|
cron:get_config ' |
||||
|
foo: (*/2 * * * *) {-p 10 -D} /bin/true |
||||
|
bar: (*/3 * * * *) {-p 10 -D -k} /bin/false |
||||
|
|
||||
|
' | tr '\0' ':' |
||||
|
" "struct 2 elts with str simple example with all fields" |
||||
|
noerror |
||||
|
is out "*/2 * * * *:-p 10 -D:foo:/bin/true:*/3 * * * *:-p 10 -D -k:bar:/bin/false:" |
||||
|
|
||||
|
|
||||
|
|
||||
|
try " |
||||
|
cron:get_config '!!float 3.7' |
||||
|
" "bad type" |
||||
|
is errlvl 1 |
||||
|
is err reg 'Error: .*type.*' |
||||
|
is out '' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
export FOO=bar |
||||
|
set pipefail && |
||||
|
cron:get_config '!var-expand (*/2 * * * *) {-p 10 -D} \"/bin/\${FOO}\"' | tr '\0' ':' |
||||
|
" "var-expand" |
||||
|
is errlvl 0 |
||||
|
is err '' |
||||
|
is out '*/2 * * * *:-p 10 -D::"/bin/bar":' |
||||
|
|
||||
|
|
@ -0,0 +1,90 @@ |
|||||
|
#!/bin/bash |
||||
|
|
||||
|
exname=$(basename $0) |
||||
|
|
||||
|
compose_core=$(which compose-core) || { |
||||
|
echo "Requires compose-core executable to be in \$PATH." >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
|
||||
|
fetch-def() { |
||||
|
local path="$1" fname="$2" |
||||
|
( . "$path" 1>&2 || { |
||||
|
echo "Failed to load '$path'." >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
declare -f "$fname" |
||||
|
) |
||||
|
} |
||||
|
|
||||
|
prefix_cmd=" |
||||
|
. /etc/shlib |
||||
|
|
||||
|
include common |
||||
|
include parse |
||||
|
|
||||
|
. ../lib/common |
||||
|
|
||||
|
$(fetch-def "$compose_core" yaml_get_values) |
||||
|
$(fetch-def "$compose_core" yaml_get_interpret) |
||||
|
$(fetch-def "$compose_core" read-0-err) |
||||
|
$(fetch-def "$compose_core" p-err) |
||||
|
|
||||
|
" || { |
||||
|
echo "Couldn't build prefix cmd" >&2 |
||||
|
exit 1 |
||||
|
} |
||||
|
|
||||
|
# mock |
||||
|
cfg-get-value() { |
||||
|
local key="$1" |
||||
|
shyaml get-value "$key" 2>/dev/null |
||||
|
} |
||||
|
export -f cfg-get-value |
||||
|
|
||||
|
yaml_get_interpret() { |
||||
|
shyaml get-value |
||||
|
} |
||||
|
export -f yaml_get_interpret |
||||
|
|
||||
|
|
||||
|
export CACHEDIR=$(mktemp -d -t tmp.XXXXXXXXXX) |
||||
|
export state_tmpdir=$(mktemp -d -t tmp.XXXXXXXXXX) |
||||
|
trap "rm -rf \"$state_tmpdir\"" EXIT |
||||
|
trap "rm -rf \"$CACHEDIR\"" EXIT |
||||
|
|
||||
|
## |
||||
|
## Tests |
||||
|
## |
||||
|
|
||||
|
try " |
||||
|
cron:lock_opts ''" |
||||
|
noerror |
||||
|
is out '' TRIM |
||||
|
|
||||
|
try " |
||||
|
cron:lock_opts '--XXX' |
||||
|
" |
||||
|
is errlvl 1 |
||||
|
is err reg 'Error: .*argument.*--XXX.*' |
||||
|
is out '' |
||||
|
|
||||
|
try " |
||||
|
cron:lock_opts -p X |
||||
|
" |
||||
|
is errlvl 1 |
||||
|
is err reg 'Error: .*priority.*X.*integer.*' |
||||
|
is out '' |
||||
|
|
||||
|
|
||||
|
try " |
||||
|
cron:lock_opts -p 10 -k -D |
||||
|
" |
||||
|
noerror |
||||
|
is out "\ |
||||
|
-p |
||||
|
10 |
||||
|
-k |
||||
|
-D" TRIM |
||||
|
|
||||
|
|
@@ -0,0 +1,100 @@
#!/bin/bash

set -eux

NTFY_BROKER="${NTFY_BROKER:-core-01.0k.io}"


## Uncipher ntfy key to destination

umask 077
ntfy_key_ciphered="src/etc/ssh/ntfy-key"
if [ ! -f "$ntfy_key_ciphered" ]; then
    echo "Error: ciphered ntfy key not found" >&2
    exit 1
fi

ntfy_key_dest=/etc/ssh/ntfy-key
if [ ! -f "$ntfy_key_dest" ]; then
    cat "$ntfy_key_ciphered" |
        gpg -d --batch --yes --passphrase 'uniquepass' > "$ntfy_key_dest" || {
            echo "Error while unpacking ntfy key to '${ntfy_key_dest}'" >&2
            exit 1
        }
fi


## Request token to ntfy server and add to config file

known_host="/root/.ssh/known_hosts"
if ! ssh-keygen -F "$NTFY_BROKER" -f "$known_host" >/dev/null; then
    ssh-keyscan -H "$NTFY_BROKER" >> "$known_host" || {
        echo "Error while adding '$NTFY_BROKER' to known_hosts" >&2
        exit 1
    }
fi

config_file="/etc/ntfy/ntfy.conf"
mkdir -p "${config_file%/*}"
if ! [ -f "$config_file" ]; then
    touch "$config_file" || {
        echo "Error: couldn't create config file '$config_file'" >&2
        exit 1
    }
fi

LOGIN=""
PASSWORD=""
source "$config_file" || {
    echo "Error: couldn't source config file '$config_file'" >&2
    exit 1
}

## Note that we require the forcing of stdin to /dev/null to avoid
## the rest of the script to be vacuumed by the ssh command.
## This effect will only happen when launching this script in special
## conditions involving stdin.
cred=$(ssh -i "$ntfy_key_dest" ntfy@"${NTFY_BROKER}" \
    request-token "$LOGIN" "$PASSWORD" </dev/null) || {
    echo "Error while requesting token to ntfy server" >&2
    exit 1
}

## XXXvlab: ideally it should be received from the last call
server="https://ntfy.0k.io/"
login=$(printf "%q" "${cred%$'\n'*}")
password=$(printf "%q" "${cred#*$'\n'}")

## check if password doesn't contain '%'

for var in server login password; do
    if [ "${!var}" == "''" ] || [[ "${!var}" == *$'\n'* ]]; then
        echo "Error: empty or invalid multi-line values retrieved for '$var'" \
             "from ntfy server. Received:" >&2
        printf "%s" "$cred" | sed -r 's/^/ | /g' >&2
        exit 1
    fi
    if [[ "${!var}" == *%* ]]; then
        ## We need a separator char for sed replacement in the config file
        echo "Error: forbidden character '%' found in $var" >&2
        exit 1
    fi
    if grep -qE "^${var^^}=" "$config_file"; then
        sed -ri "s%^${var^^}=.*$%${var^^}=\"${!var}\"%g" "$config_file"
    else
        echo "${var^^}=\"${!var}\"" >> "$config_file"
    fi
done


if ! [ -f "/etc/ntfy/topics.yml" ]; then
    cat <<'EOF' > /etc/ntfy/topics.yml
.*\.(emerg|alert|crit|err|warning|notice):
  - ${LOGIN}_main
EOF
fi


## provide 'send' command

cp -f "$PWD/src/bin/send" /usr/local/bin/send
@@ -0,0 +1,166 @@
#!/bin/bash

## Send a notification with NTFY and check if the config file is complete

if [[ "$UID" == "0" ]]; then
    NTFY_CONFIG_DIR="${NTFY_CONFIG_DIR:-/etc/ntfy}"
else
    NTFY_CONFIG_DIR="${NTFY_CONFIG_DIR:-~/.config/ntfy}"
fi
NTFY_CONFIG_FILE="$NTFY_CONFIG_DIR/ntfy.conf"

SERVER="https://ntfy.0k.io"

[ -f "$NTFY_CONFIG_DIR/topics.yml" ] || {
    echo "Error: no 'topics.yml' file found in $NTFY_CONFIG_DIR" >&2
    echo " Please setup the topics for the notification channels in this file." >&2
    exit 1
}

if ! [ -e "$NTFY_CONFIG_FILE" ]; then
    mkdir -p "${NTFY_CONFIG_FILE%/*}"
    ## default option to change if needed
    echo "SERVER=$SERVER" > "$NTFY_CONFIG_FILE"
elif ! grep -q "^SERVER=" "$NTFY_CONFIG_FILE"; then
    echo "SERVER=$SERVER" >> "$NTFY_CONFIG_FILE"
fi

source "$NTFY_CONFIG_FILE" || {
    echo "Error: could not source '$NTFY_CONFIG_FILE'" >&2
    exit 1
}

SERVER="${SERVER%/}"

for var in SERVER LOGIN PASSWORD; do
    if ! [ -v "$var" ]; then
        echo "Error: missing $var in $NTFY_CONFIG_FILE"
        exit 1
    fi
done


exname=${0##*/}
channels=()

usage="$exname [options] MESSAGE"
help="\

Send MESSAGE with TITLE to the different topics defined by a CHANNEL

$exname will read the $NTFY_CONFIG_DIR/topics.yml for channel-to-topics
conversion.


Usage:
  $usage

Options:
  -c CHANNEL   Specify one or multiple channels. Default 'main'.
               (can be provided multiple times)
  -t TITLE     Specify the title of the message. (it'll still be
               prefixed with the hostname)
               Default is empty
  MESSAGE      Message to send.
  -h           Display this help and exit.

"

while [ "$#" -gt 0 ]; do
    arg="$1"
    shift
    case "$arg" in
        -h|--help)
            echo "$help"
            exit 0
            ;;
        -c|--channel)
            [ -n "$1" ] || {
                echo "Error: no argument for channel option." >&2
                echo "$usage" >&2
                exit 1
            }
            IFS=", " channels+=($1)
            shift
            ;;
        -t|--title)
            [ -n "$1" ] || {
                echo "Error: no argument for title option." >&2
                echo "$usage" >&2
                exit 1
            }
            title="$1"
            shift
            ;;
        *)
            [ -z "$message" ] && { message="$arg"; continue; }
            echo "Error: Unexpected positional argument '$arg'." >&2
            echo "$usage" >&2
            exit 1
            ;;
    esac
done

[ -n "$message" ] || {
    echo "Error: missing message." >&2
    echo "$usage" >&2
    exit 1
}

read-0() {
    local eof='' IFS=''
    while [ "$1" ]; do
        read -r -d '' -- "$1" || eof=1
        shift
    done
    [ -z "$eof" ]
}

curl_opts=(
    -s
    -u "$LOGIN:$PASSWORD"
    -d "$message"
)

title="[$(hostname)] $title"
title="${title%%+([[:space:]])}"
curl_opts+=(-H "Title: $title")

declare -A sent_topic=()

if [ "${#channels[@]}" -eq 0 ]; then
    channels=("main")
fi

for channel in "${channels[@]}"; do
    channel_quoted=$(printf "%q" "$channel")
    content=$(cat "$NTFY_CONFIG_DIR/topics.yml")
    while read-0 channel_regex topics; do
        [[ "$channel" =~ ^$channel_regex$ ]] || continue
        rematch=("${BASH_REMATCH[@]}")
        while read-0 topic; do
            ttopic=$(printf "%s" "$topic" | yq "type")
            if [ "$ttopic" != '!!str' ]; then
                echo "Error: Unexpected '$ttopic' type for value of channel $channel." >&2
                exit 1
            fi
            topic=$(printf "%s" "$topic" | yq -r " \"\" + .")
            if ! [[ "$topic" =~ ^[a-zA-Z0-9\$\{\}*\ \,_.-]+$ ]]; then
                echo "Error: Invalid topic value '$topic' expression in $channel channel." >&2
                exit 1
            fi
            new_topics=($(set -- "${rematch[@]}"; eval echo "${topic//\*/\\*}"))
            for new_topic in "${new_topics[@]}"; do
                [ -n "${sent_topic["$new_topic"]}" ] && continue
                sent_topic["$new_topic"]=1
                if ! out=$(curl "${curl_opts[@]}" "$SERVER/${new_topic}"); then
                    echo "Error: could not send message to $new_topic." >&2
                    echo "curl command:" >&2
                    echo " curl ${curl_opts[@]} $SERVER/${new_topic}" >&2
                    echo "$out" | sed 's/^/ | /' >&2
                    exit 1
                fi
            done
        done < <(printf "%s" "$topics" | yq e -0 '.[]')
    done < <(printf "%s" "$content" | yq e -0 'to_entries | .[] | [.key, .value] |.[]')
done
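
For context, a hypothetical invocation of the =send= helper above, assuming the default =topics.yml= written by the install hook (the channel name, title and message are made up):

#+begin_src sh
## "system.err" matches the default channel regex
## ".*\.(emerg|alert|crit|err|warning|notice)", so the message is published
## to the "${LOGIN}_main" topic (LOGIN being read from ntfy.conf).
send -c system.err -t "backup" "nightly backup failed on /srv"
#+end_src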
@@ -1,44 +0,0 @@
#!/bin/bash

## When writing relation script, remember:
## - they should be idempotents
## - they can be launched while the dockers is already up
## - they are launched from the host
## - the target of the link is launched first, and get a chance to ``relation-set``
## - both side of the scripts get to use ``relation-get``.

. lib/common

set -e

## XXXvlab: should use container name here so that it could support
## multiple postgres
label=${SERVICE_NAME}
DST=$CONFIGSTORE/$TARGET_SERVICE_NAME/etc/cron/$label

## XXXvlab: Should we do a 'docker exec' instead ?
bin_console="dc run -u www-data --rm --entrypoint \\\"$GOGOCARTO_DIR/bin/console\\\" $MASTER_BASE_SERVICE_NAME"

## Warning: 'docker -v' will use HOST directory even if launched from
## 'cron' container.
file_put "$DST" <<EOF
@daily root lock ${label}-checkvote -D -p 10 -c "\
  $bin_console app:elements:checkvote" 2>&1 | ts '\%F \%T \%Z' >> /var/log/cron/${SERVICE_NAME}-checkvote_script.log

@daily root lock ${label}-checkExternalSourceToUpdate -D -p 10 -c "\
  $bin_console app:elements:checkExternalSourceToUpdate" 2>&1 | ts '\%F \%T \%Z' >> /var/log/cron/${SERVICE_NAME}-checkExternalSourceToUpdate_script.log

@daily root lock ${label}-notify-moderation -D -p 10 -c "\
  $bin_console app:notify-moderation" 2>&1 | ts '\%F \%T \%Z' >> /var/log/cron/${SERVICE_NAME}-notify-moderation_script.log


@hourly root lock ${label}-sendNewsletter -D -p 10 -c "\
  $bin_console app:users:sendNewsletter" 2>&1 | ts '\%F \%T \%Z' >> /var/log/cron/${SERVICE_NAME}-sendNewsletter_script.log


*/5 * * * * root lock ${label}-webhooks-post -D -p 10 -c "\
  $bin_console --env=prod app:webhooks:post" 2>&1 | ts '\%F \%T \%Z' >> /var/log/cron/${SERVICE_NAME}-webhooks-post_script.log

EOF
chmod +x "$DST"
@@ -0,0 +1,22 @@
# -*- ispell-local-dictionary: "english" -*-

* Usage

** How to manage users

This allows you to reset passwords for users.

From the container (using `docker exec -ti MY_CONTAINER sh`), use the
`/hedgedoc/bin/manage_users` command (it is not in $PATH).

#+begin_example
Command-line utility to create users for email-signin.

Usage: bin/manage_users [--pass password] (--add | --del) user-email
Options:
    --add    Add user with the specified user-email
    --del    Delete user with specified user-email
    --reset  Reset user password with specified user-email
    --pass   Use password from cmdline rather than prompting
#+end_example

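As a sketch, resetting a password could also be done in one step from the host; the container name and e-mail below are hypothetical:

#+begin_src sh
## hypothetical container name and user e-mail
docker exec -ti myproject_hedgedoc_1 \
    /hedgedoc/bin/manage_users --reset someone@example.com
#+end_src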
@@ -1,34 +0,0 @@
#!/bin/bash

## When writing relation script, remember:
## - they should be idempotents
## - they can be launched while the dockers is already up
## - they are launched from the host
## - the target of the link is launched first, and get a chance to ``relation-set``
## - both side of the scripts get to use ``relation-get``.

set -e

label=${SERVICE_NAME}-renew
DST=$CONFIGSTORE/$TARGET_SERVICE_NAME/etc/cron/$label
LOCAL_LOG=/var/log/cron/${label}_script.log
schedule=$(relation-get schedule)

if ! echo "$schedule" | egrep '^\s*(([0-9/,*-]+\s+){4,4}[0-9/,*-]+|@[a-z]+)\s*$' >/dev/null 2>&1; then
    err "Unrecognized schedule '$schedule'."
    exit 1
fi

## Warning: using '\' in heredoc will be removed in the final cron file, which
## is totally wanted: cron does not support multilines.

## Warning: 'docker -v' will use HOST directory even if launched from
## 'cron' container.
file_put "$DST" <<EOF
COMPOSE_LAUNCHER_OPTS=$COMPOSE_LAUNCHER_OPTS

$schedule root lock $label -D -p 10 -c "\
  compose crt $SERVICE_NAME renew" 2>&1 | ts '\%F \%T \%Z' >> $LOCAL_LOG

EOF
chmod +x "$DST"
@@ -1,30 +0,0 @@
#!/bin/bash

## When writing relation script, remember:
## - they should be idempotents
## - they can be launched while the dockers is already up
## - they are launched from the host
## - the target of the link is launched first, and get a chance to ``relation-set``
## - both side of the scripts get to use ``relation-get``.

set -e

label=launch-$SERVICE_NAME
DST=$CONFIGSTORE/$TARGET_SERVICE_NAME/etc/cron/$label
schedule=$(relation-get schedule) || true

if ! echo "$schedule" | egrep '^\s*(([0-9/,*-]+\s+){4,4}[0-9/,*-]+|@[a-z]+)\s*$' >/dev/null 2>&1; then
    err "Unrecognized schedule '$schedule'."
    exit 1
fi

## Warning: using '\' in heredoc will be removed in the final cron file, which
## is totally wanted: cron does not support multilines.

## Warning: 'docker -v' will use HOST directory even if launched from
## 'cron' container.
file_put "$DST" <<EOF
$schedule root lock $label -D -p 10 -c "\
  dc run --rm $SERVICE_NAME" 2>&1 | ts '\%F \%T \%Z' >> /var/log/cron/${label}_script.log
EOF
chmod +x "$DST"
@@ -0,0 +1,15 @@
#!/bin/bash

## When writing relation script, remember:
## - they should be idempotents
## - they can be launched while the dockers is already up
## - they are launched from the host
## - the target of the link is launched first, and get a chance to ``relation-set``
## - both side of the scripts get to use ``relation-get``.

relation-set type mysql || {
    err "Could not set relation ${WHITE}type${NORMAL} to 'mysql'."
    exit 1
}

. hooks/mysql_database-relation-joined
@@ -1,51 +0,0 @@
#!/bin/bash

. lib/common

set -e

PASSWORD="$(relation-get password)"
USER="$(relation-get user)"
DBNAME="$(relation-get dbname)"


## This check adds purely arbitrary limits to what could be a password
## if we need to open that more, just consider the next script where we'll
## need to write in a PHP structure, or in YAML structure.

## Note that here, "[]" chars are not accepted just because it doesn't seem evident
## to test for those in bash.
if ! [[ "$PASSWORD" =~ ^[a-zA-Z0-9~\`\&+=@\#^\*/\\_%\$:\;\!?.,\<\>{}()\"\'|-]*$ ]]; then
    err "Invalid password chosen for mysql database."
    exit 1
fi

## if config is not existent
if [ -e "$CONFIGFILE" ] && grep "^ 'dbuser' => '" "$CONFIGFILE" >/dev/null; then

    ## 'occ' can't be used as it will try to connect to mysql before running and
    ## will fail if user/password is not correct

    ## We need to get through bash, and sed interpretation, then PHP single quoted strings.
    quoted_user="${USER//\\/\\\\\\\\\\}"
    quoted_user="${quoted_user//\'/\\\\\'}"
    quoted_password="${PASSWORD//\\/\\\\\\\\\\}"
    quoted_password="${quoted_password//\'/\\\\\'}"
    sed -ri "s/^( 'dbuser' => ')(.*)(',)$/\1${quoted_user}\3/g;\
             s/^( 'dbpassword' => ')(.*)(',)$/\1${quoted_password}\3/g;" "$CONFIGFILE"
else

    ## These variable are not used by current docker image after first install

    config-add "\
services:
  $MASTER_BASE_SERVICE_NAME:
    environment:
      MYSQL_HOST: $MASTER_TARGET_SERVICE_NAME
      MYSQL_DATABASE: $DBNAME
      MYSQL_PASSWORD: $PASSWORD
      MYSQL_USER: $USER
"
fi

info "Configured $SERVICE_NAME code for $TARGET_SERVICE_NAME access."
@@ -0,0 +1 @@
postgres_database-relation-joined
@@ -1,51 +1,11 @@
 #!/bin/bash
 
-. lib/common
+type="${0##*/}"
+type="${type%_database-relation-joined}"
 
-set -e
-
-PASSWORD="$(relation-get password)"
-USER="$(relation-get user)"
-DBNAME="$(relation-get dbname)"
-
-
-## This check adds purely arbitrary limits to what could be a password
-## if we need to open that more, just consider the next script where we'll
-## need to write in a PHP structure, or in YAML structure.
-
-## Note that here, "[]" chars are not accepted just because it doesn't seem evident
-## to test for those in bash.
-if ! [[ "$PASSWORD" =~ ^[a-zA-Z0-9~\`\&+=@\#^\*/\\_%\$:\;\!?.,\<\>{}()\"\'|-]*$ ]]; then
-    err "Invalid password chosen for postgres database."
+set-relation type "$type" || {
+    err "Could not set relation ${WHITE}type${NORMAL} to '$type'."
     exit 1
-fi
-
-## if config is not existent
-if [ -e "$CONFIGFILE" ] && grep "^ 'dbuser' => '" "$CONFIGFILE" >/dev/null; then
-
-    ## 'occ' can't be used as it will try to connect to postgres before running and
-    ## will fail if user/password is not correct
-
-    ## We need to get through bash, and sed interpretation, then PHP single quoted strings.
-    quoted_user="${USER//\\/\\\\\\\\\\}"
-    quoted_user="${quoted_user//\'/\\\\\'}"
-    quoted_password="${PASSWORD//\\/\\\\\\\\\\}"
-    quoted_password="${quoted_password//\'/\\\\\'}"
-    sed -ri "s/^( 'dbuser' => ')(.*)(',)$/\1${quoted_user}\3/g;\
-             s/^( 'dbpassword' => ')(.*)(',)$/\1${quoted_password}\3/g;" "$CONFIGFILE"
-else
-
-    ## These variable are not used by current docker image after first install
-
-    config-add "\
-services:
-  $MASTER_BASE_SERVICE_NAME:
-    environment:
-      POSTGRES_HOST: $MASTER_TARGET_SERVICE_NAME
-      POSTGRES_DB: $DBNAME
-      POSTGRES_PASSWORD: $PASSWORD
-      POSTGRES_USER: $USER
-"
-fi
+}
 
-info "Configured $SERVICE_NAME code for $TARGET_SERVICE_NAME access."
+. ./hooks/sql_database-relation-joined
@@ -0,0 +1,75 @@
#!/bin/bash

. lib/common

set -e
TYPE="$(relation-get type)" || {
    err "No ${WHITE}type${NORMAL} set in relation."
    exit 1
}
PASSWORD="$(relation-get password)"
USER="$(relation-get user)"
DBNAME="$(relation-get dbname)"


## This check adds purely arbitrary limits to what could be a password
## if we need to open that more, just consider the next script where we'll
## need to write in a PHP structure, or in YAML structure.

## Note that here, "[]" chars are not accepted just because it doesn't seem evident
## to test for those in bash.
if ! [[ "$PASSWORD" =~ ^[a-zA-Z0-9~\`\&+=@\#^\*/\\_%\$:\;\!?.,\<\>{}()\"\'|-]*$ ]]; then
    err "Invalid password chosen for $type database."
    exit 1
fi

## if config is not existent
if [ -e "$CONFIGFILE" ] && grep "^ 'dbuser' => '" "$CONFIGFILE" >/dev/null; then

    ## 'occ' can't be used as it will try to connect to db before running and
    ## will fail if user/password is not correct

    ## We need to get through bash, and sed interpretation, then PHP single quoted strings.
    quoted_user="${USER//\\/\\\\\\\\\\}"
    quoted_user="${quoted_user//\'/\\\\\'}"
    quoted_password="${PASSWORD//\\/\\\\\\\\\\}"
    quoted_password="${quoted_password//\'/\\\\\'}"
    case "$TYPE" in
        mysql)
            nextcloud_type="mysql";;
        postgres)
            nextcloud_type="pgsql";;
        *)
            err "Unknown type '$TYPE' for database."
            exit 1
            ;;
    esac

    sed -ri "s/^( 'dbuser' => ')(.*)(',)$/\1${quoted_user}\3/g;\
             s/^( 'dbpassword' => ')(.*)(',)$/\1${quoted_password}\3/g;\
             s/^( 'dbtype' => ')(.*)(',)$/\1${nextcloud_type}\3/g;\
             s/^( 'dbhost' => ')(.*)(',)$/\1${MASTER_TARGET_SERVICE_NAME}\3/g;\
             " "$CONFIGFILE"

else

    ## These variable are not used by current docker image after first install

    if [ "$TYPE" == "mysql" ]; then
        database_env_label="DATABASE"
    else
        database_env_label="DB"
    fi

    config-add "\
services:
  $MASTER_BASE_SERVICE_NAME:
    environment:
      ${TYPE^^}_HOST: $MASTER_TARGET_SERVICE_NAME
      ${TYPE^^}_${database_env_label}: $DBNAME
      ${TYPE^^}_PASSWORD: $PASSWORD
      ${TYPE^^}_USER: $USER
"
fi

info "Configured $SERVICE_NAME code for $TARGET_SERVICE_NAME access."
@@ -1,33 +1,20 @@
 #!/bin/bash
 
+. lib/common
 
 set -e
 
 DOMAIN=$(relation-get domain) || exit 1
 URL="$(relation-get url)" || exit 1
 PROTO="${URL%%://*}"
 
-if ! trusted_domains="$(
-    compose -q --no-relations --no-init occ "$MASTER_BASE_SERVICE_NAME" \
-        config:system:get trusted_domains)"; then
-    err "Couldn't get 'trusted_domains'. Here's the ouput:"
-    echo "$trusted_domains" | prefix " | " >&2
-
-    echo "If the code of nextcloud is already there (command occ is found), but " >&2
-    echo "the database is not yet created, this situation will arise." >&2
+nextcloud:config:simple:add overwritehost "$DOMAIN" || {
+    err "Failed to set ${WHITE}overwritehost${NORMAL} to '$DOMAIN'."
     exit 1
-fi
+}
 
-occ_opts=(
-    ## necessary as nextcloud do not detect correctly those, and behind
-    ## a proxy, it will generate a lot of URL that are not detected
-    ## by means of ``ReverseProxyPass`` on apache for instance
+nextcloud:config:simple:add overwriteprotocol "$PROTO" || {
+    err "Failed to set ${WHITE}overwriteprotocol${NORMAL} to '$PROTO'."
+    exit 1
+}
 
-    config:system:set overwritehost --value="$DOMAIN" \;
-    config:system:set overwriteprotocol --value="$PROTO"
-)
-if ! [[ $'\n'"$trusted_domains"$'\n' == *$'\n'"$MASTER_BASE_SERVICE_NAME"$'\n'* ]]; then
-    trusted_index=$(echo "$trusted_domains" | wc -l)
-    debug "Adding $MASTER_TARGET_SERVICE_NAME to ${WHITE}trusted_domains${NORMAL}."
-    occ_opts+=( \; config:system:set trusted_domains "$trusted_index" --value="$MASTER_BASE_SERVICE_NAME")
-fi
-compose --no-relations --no-init occ "$MASTER_BASE_SERVICE_NAME" "${occ_opts[@]}"
@@ -0,0 +1,40 @@

Odoo-tecnativa is an Odoo image containing all sources and add-ons, because
we want to certify the whole image.

So this means there are no builds being managed by compose, and no injection
of code.


* Usage

** dbfilter

With image ~16.0~, an advanced version of ~dbfilter~ is installed. Here are
a few examples:

#+begin_src yaml
odoo:
  # ..
  options:
    dbfilter:
      ## DOMAIN_REGEX: DBFILTER
      '^www.domain.org$': '^bar$'          ## domain `www.domain.org` can only see `bar`.
      '^foo\.': 'foo_.*'                   ## domain starting with `foo.` can see db `foo_`
      '^(?P<name>[^.]+)\.': '%{name}s_.*'  ## domain starting with `<PREFIX>.` can see db `PREFIX_`
      '': 'other_.*'                       ## all domains can see db 'other_*'

  ## Don't forget to configure the domains in the web-proxy part !
  relations:
    web-proxy:
      apache:
        domain: www.domain.org
        aliases:
          - foo.otherdomain.com
          - bar.wiz.eu
          - test.domain.org
#+end_src

If there's only one database seen because of the ~dbfilter~, odoo will
use it by default.
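
As an illustration of the capture-group rule above (the request domain is hypothetical):

#+begin_example
## hypothetical request on https://erp.example.com
'^(?P<name>[^.]+)\.'  ->  name = "erp"  ->  '%{name}s_.*' becomes 'erp_.*'
#+end_example

So only databases named ~erp_...~ would be proposed for that domain.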
@ -1,8 +0,0 @@ |
|||||
|
|
||||
|
|
||||
Odoo-tecnativa is a odoo image containing all source and add-ons because |
|
||||
we want to certify the whole image. |
|
||||
|
|
||||
So this means there are no builds being managed by compose, and no injection |
|
||||
of code. |
|
||||
|
|
@@ -0,0 +1,16 @@
#!/bin/bash

## When writing relation script, remember:
## - they should be idempotents
## - they can be launched while the dockers is already up
## - they are launched from the host
## - the target of the link is launched first, and get a chance to ``relation-set``
## - both side of the scripts get to use ``relation-get``.

relation-set type postgres || {
    err "Could not set relation ${WHITE}type${NORMAL} to 'postgres'."
    exit 1
}

. hooks/postgres_database-relation-joined