• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 27th, 2023


  • tko@tkohhh.social to Selfhosted@lemmy.world: Logwatch
    8 hours ago

    Here you go. I commented out what is not necessary. There are some passwords noted that you’ll want to set to your own values. Also, pay attention to the volume mappings… I left my values in there, but you’ll almost certainly need to change those to make sense for your host system. Hopefully this is helpful!

    services:
      mongodb:
        image: "mongo:6.0"
        volumes:
          - "/mnt/user/appdata/mongo-graylog:/data/db"
    #      - "/mnt/user/backup/mongodb:/backup"
        restart: "on-failure"
    #    logging:
    #      driver: "gelf"
    #      options:
    #        gelf-address: "udp://10.9.8.7:12201"
    #        tag: "mongodb"
    
      opensearch:
        image: "opensearchproject/opensearch:2.13.0"
        environment:
          - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
          - "bootstrap.memory_lock=true"
          - "discovery.type=single-node"
          - "action.auto_create_index=false"
          - "plugins.security.ssl.http.enabled=false"
          - "plugins.security.disabled=true"
          - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=[yourpasswordhere]"
        ulimits:
          nofile: 64000
          memlock:
            hard: -1
            soft: -1
        volumes:
          - "/mnt/user/appdata/opensearch-graylog:/usr/share/opensearch/data"
        restart: "on-failure"
    #    logging:
    #      driver: "gelf"
    #      options:
    #        gelf-address: "udp://10.9.8.7:12201"
    #        tag: "opensearch"
    
      graylog:
        image: "graylog/graylog:6.2.0"
        depends_on:
          opensearch:
            condition: "service_started"
          mongodb:
            condition: "service_started"
        entrypoint: "/usr/bin/tini -- wait-for-it opensearch:9200 -- /docker-entrypoint.sh"
        environment:
          GRAYLOG_TIMEZONE: "America/Los_Angeles"
          TZ: "America/Los_Angeles"
          GRAYLOG_ROOT_TIMEZONE: "America/Los_Angeles"
          GRAYLOG_NODE_ID_FILE: "/usr/share/graylog/data/config/node-id"
          GRAYLOG_PASSWORD_SECRET: "[anotherpasswordhere]"
          GRAYLOG_ROOT_PASSWORD_SHA2: "[aSHA2passwordhash]"
          GRAYLOG_HTTP_BIND_ADDRESS: "0.0.0.0:9000"
          GRAYLOG_HTTP_EXTERNAL_URI: "http://localhost:9000/"
          GRAYLOG_ELASTICSEARCH_HOSTS: "http://opensearch:9200/"
          GRAYLOG_MONGODB_URI: "mongodb://mongodb:27017/graylog"
    
        ports:
          - "5044:5044/tcp"   # Beats
          - "5140:5140/udp"   # Syslog (UDP)
          - "5140:5140/tcp"   # Syslog (TCP)
          - "5141:5141/udp"   # Syslog - dd-wrt
          - "5555:5555/tcp"   # Raw TCP
          - "5555:5555/udp"   # Raw UDP
          - "9000:9000/tcp"   # Server API / web UI
          - "12201:12201/tcp" # GELF TCP
          - "12201:12201/udp" # GELF UDP
          - "10000:10000/tcp" # Custom TCP port
          - "10000:10000/udp" # Custom UDP port
          - "13301:13301/tcp" # Forwarder data
          - "13302:13302/tcp" # Forwarder config
        volumes:
          - "/mnt/user/appdata/graylog/data:/usr/share/graylog/data/data"
          - "/mnt/user/appdata/graylog/journal:/usr/share/graylog/data/journal"
          - "/mnt/user/appdata/graylog/etc:/etc/graylog"
        restart: "on-failure"
    
    volumes:
      # note: these named volumes are unused as written (the services above use
      # bind mounts); keep them only if you switch the mappings over to named volumes
      mongodb_data:
      os_data:
      graylog_data:
      graylog_journal:
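    As an aside: the commented-out logging blocks above show the same pattern you'd use on any other container to ship its own logs straight to Graylog's GELF UDP input (port 12201 above). Uncommented, and assuming your Graylog host is at 10.9.8.7 (swap in your own address), it looks like this for a hypothetical service:

      myapp:
        image: "myapp:1.0"   # hypothetical service, use your own
        logging:
          driver: "gelf"
          options:
            gelf-address: "udp://10.9.8.7:12201"
            tag: "myapp"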
    

  • tko@tkohhh.social to Selfhosted@lemmy.world: Logwatch
    11 hours ago

    Can you clarify what your concern is with “heavy” logging solutions that require a database/Elasticsearch? If you’re worried about system resources, that’s one thing, but if it’s just that it seems “complicated,” I have a docker compose file that handles Graylog, OpenSearch, and MongoDB. Just give it a couple of persistent storage volumes and it’s good to go. You can send logs directly to it with syslog or GELF, or set up a Filebeat container to ingest file logs.
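    If you go the Filebeat route, a minimal sketch looks something like this (the image tag and paths are examples you'd adjust for your setup); Graylog's Beats input on 5044 accepts Filebeat's standard logstash output:

      filebeat:
        image: "docker.elastic.co/beats/filebeat:8.13.4"
        volumes:
          - "./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
          - "/var/log:/hostlogs:ro"

    with a filebeat.yml along these lines:

      filebeat.inputs:
        - type: filestream
          paths:
            - /hostlogs/*.log
      output.logstash:
        hosts: ["your-graylog-host:5044"]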

    There’s a LOT you can do with it once you’ve got your logs into the system, but you don’t NEED to do anything else. Just something to consider!


  • tko@tkohhh.social to Selfhosted@lemmy.world: Verifying & Validating a Docker Container
    16 hours ago

    I’m far from an expert, but it seems to me that if you’re setting up your containers according to best practice, you would only be mapping the specific ports needed for the service, which renders a wayward “open port” useless. If there’s some kind of UI exploit, that’s a different story; perhaps this is why most people suggest not exposing your containerized services to the WAN. If we’re talking about a virus that might affect files, it can only see the files that are mapped into the container, which limits the damage that can be done. If you are exposing sensitive files to your container, it might be worth vetting the container more thoroughly (and making sure you have good backups).
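    To make that concrete, a compose sketch (names and paths here are made up): binding a published port to the host's loopback address keeps it off the LAN/WAN entirely, and a narrow volume mapping limits what the container can touch:

      services:
        someapp:
          image: "someapp:1.2.3"                   # hypothetical image
          ports:
            - "127.0.0.1:8080:80"                  # reachable only from the host itself
          volumes:
            - "/mnt/user/appdata/someapp:/data"    # the container sees nothing else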



  • tko@tkohhh.social to Selfhosted@lemmy.world: Version Dashboard
    14 days ago

    I always use a version tag, but I don’t spend any time reading release notes for 95% of my containers. I’ll go through and update versions a couple of times a year. If something breaks, at least I know it broke because I updated it, and I can troubleshoot then. The main consideration for me is not to update accidentally and then have a surprise problem to deal with.
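    In compose terms, the difference is just what comes after the colon (postgres here is only an illustration):

      image: "postgres:16.2"      # pinned: only changes when you edit this line
      # image: "postgres:latest"  # floating: a re-pull can bring a surprise major upgrade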


  • I think this is just a terminology difference. The documentation says that “Add Ons” are not supported in Container and Core, but “Add Ons” means the easy button you press to install those services. All of those Add On services are just containers that HAOS manages for you. Every single one of them can be set up as a container manually and function the same as the official “Add Ons.”

    I don’t know for sure, but I wonder if the reason for this is that it’s not technically possible for a container to manage other external containers. Does anybody know about this?
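    For what it's worth, a container can manage sibling containers on the same host when the Docker socket is mounted into it (this is how tools like Portainer work), so it may be more of a design and support decision than a hard technical limit. A minimal sketch, with an example image tag:

      portainer:
        image: "portainer/portainer-ce:2.20.3"
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock"
        ports:
          - "9443:9443"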