Getting Started

The quickest way to get started is by configuring LoggiFly with environment variables only, but for full flexibility and feature access, using a config.yaml file is recommended.

The following section provides a quick start with minimal configuration. For more features and customization options, see the configuration guide to learn more about how to configure LoggiFly.

Notification Services

You can send notifications directly to ntfy and set the topic, tags, priority, and more.

You can also send notifications to most other notification services via Apprise. Just follow their docs on how to best configure the Apprise URL for your notification service.
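As a sketch of what such a URL looks like: Apprise's Discord scheme takes the webhook ID and token from your Discord webhook URL. The values below are placeholders, not real credentials; check the Apprise documentation for the exact scheme of your service.

```yaml
    environment:
      # Apprise Discord scheme: discord://{WebhookID}/{WebhookToken}
      # (placeholder values shown)
      APPRISE_URL: "discord://0000000000/AAAAAAAAAA"
```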

Configuration

The following docker compose examples presume that you are using a config.yaml file. If you don't want to use a config file, you can comment out the config.yaml mount and use environment variables only.

INFO

Environment variables allow for a simple and much quicker setup but they don't support configuring different keywords per container or features like regex, container actions, message formatting and more. With a config.yaml file you have access to all features and can apply settings at multiple levels: globally, per log source, per rule and per trigger, allowing for much more fine-grained control.
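As an illustrative sketch of that layering, the same `keywords` setting can be declared globally and then extended per rule (field names as used in the minimal config later in this guide; global keywords apply to every matched target in addition to the rule's own keywords):

```yaml
global:
  keywords:
    - fatal            # applies to every monitored container
containers:
  rules:
    - container_name: my-container
      keywords:
        - error        # checked in addition to the global keywords
```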

Environment Variables

Here are some environment variables to give you a quick start without having to create a config.yaml file. Just edit and paste them into the environment section of your docker compose file.

Environment Variables

```yaml
    environment:
      # Choose at least one notification service
      NTFY_URL: "https://ntfy.sh"
      NTFY_TOPIC: "your_topic"
      # ntfy token or username + password for authentication
      NTFY_TOKEN: <token>
      NTFY_USERNAME: <username>
      NTFY_PASSWORD: <password>
      APPRISE_URL: "discord://..."               # Apprise-compatible URL

      CONTAINERS: "vaultwarden,audiobookshelf"   # comma-separated container names to monitor
      GLOBAL_KEYWORDS: "error,failed login"      # keywords applied to all monitored containers
```

config.yaml

Tips

Here is a very minimal config that you can edit and paste into a newly created config.yaml file in the mounted /config directory:

config.yaml

```yaml
# Minimal LoggiFly config. Edit and save to /config/config.yaml
#
# For all available options see:
# https://clemcer.github.io/loggifly/guide/config/

global:
  # keywords here apply to every matched target
  keywords:
    - fatal
    - keyword: panic
      attach_logfile: true


containers:
  rules:
    - container_name: my-container
      keywords:
        - error
        - regex: "(username|password).*incorrect"

    - container_name: "*-db"  # matches any container whose name ends with "-db"
      keywords:
        - keyword: critical

    # You can also use the full matching syntax instead of the shorthands used above
    - match:
        include:
          container_names:
            - "webapp*"
            - "nginx"
        exclude:
          container_names:
            - "*-test"
            - "tmp-*"
      keywords:
        - keyword: "timeout"


notifications:
  # Configure at least one notification service
  ntfy:
    url: "http://your-ntfy-server"
    topic: "loggifly"
    token: "ntfy-token"           # token auth; alternatively use username + password below
    # username: john
    # password: secret
  apprise:
    url: "discord://webhook-url"  # any Apprise-compatible URL
```

Docker Compose

For better security, it is best practice to use a Docker socket proxy when exposing the Docker socket to an application. Below are compose examples with two different socket proxies.

If you don't want to use a socket proxy, for example because you want to use the container actions feature (which requires write access to the Docker socket), you can also use the provided compose file with direct Docker socket access.

Direct socket access

```yaml
services:
  loggifly:
    image: ghcr.io/clemcer/loggifly:v2
    container_name: loggifly
    read_only: true
    environment:
      # Configure environment variables here if you don't want to use a config.yaml file
      TZ: Europe/Berlin
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config:/config    # Place your config.yaml file in this directory
    restart: unless-stopped
```
Docker Socket Proxy (Tecnativa)

```yaml
services:
  loggifly:
    image: ghcr.io/clemcer/loggifly:v2
    container_name: loggifly
    # It is recommended to set the user so that the container does not run as root
    user: 1000:1000
    read_only: true
    volumes:
      # Place your config.yaml here if you are using one
      - ./config:/config
    environment:
      TZ: Europe/Berlin
      DOCKER_HOST: tcp://socket-proxy:2375
    depends_on:
      - socket-proxy
    restart: unless-stopped

  socket-proxy:
    image: tecnativa/docker-socket-proxy
    container_name: docker-socket-proxy
    environment:
      - CONTAINERS=1  # allow read access to the container endpoints
      - POST=0        # deny all write (POST) requests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
```
Docker Socket Proxy (11notes)

```yaml
services:
  loggifly:
    image: ghcr.io/clemcer/loggifly:v2
    container_name: loggifly
    # It is recommended to set the user so that the container does not run as root
    user: 1000:1000
    read_only: true
    volumes:
      - socket-proxy:/var/run
      # Place your config.yaml here if you are using one
      - ./config:/config
    depends_on:
      - socket-proxy
    restart: unless-stopped

  socket-proxy:
    image: "11notes/socket-proxy:2"
    read_only: true
    # Make sure to use the same UID/GID as the owner of your docker socket.
    # You can check with: `ls -n /var/run/docker.sock`
    user: "0:996"
    volumes:
      - "/run/docker.sock:/run/docker.sock:ro"
      - "socket-proxy:/run/proxy"
    restart: "always"

volumes:
  socket-proxy:
```
Strict Config Validation

By default, unknown or misspelled field names in your config.yaml will cause LoggiFly to refuse to start with a validation error. This ensures that none of your configuration is silently ignored without you noticing.

If this bothers you, you can set STRICT_CONFIG=false in your compose file to ignore invalid fields (for the most part) and only log a warning:

```yaml
environment:
  STRICT_CONFIG: "false"
```
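For example, a config with a misspelled field such as `keyword` instead of `keywords` (hypothetical snippet) would be rejected at startup by default, and only produce a warning with STRICT_CONFIG set to "false":

```yaml
containers:
  rules:
    - container_name: my-container
      keyword:        # typo: should be "keywords" -- startup error by default,
        - error       # only a warning with STRICT_CONFIG: "false"
```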