Replies: 2 comments 3 replies
Hey, thanks a ton for the detailed feedback! This is exactly the kind of real-world testing that helps improve the docs. Let me address the points you raised:

**Re: PUBLIC_API_URL Auto-Detection**

You're correct: the auto-detection logic assumes a standard setup (access via port 3000, API at 8080) and tends to break on orchestrators like Komodo. I'll add a troubleshooting section for this. For now, hardcoding it in your `.env` is the correct fix: `PUBLIC_API_URL=http://your-docker-host:8080` (I'd love to see your Komodo config later if you're up for sharing; it would help me make the detection smarter).

**Docker Compose Profiles**

Good catch. Fluent Bit is hidden behind a `logging` profile so it doesn't start by default for users who don't need syslog. I definitely missed making that clear in the docs. To start it, you can run:

```sh
# To start with Fluent Bit / syslog support:
docker compose --profile logging up -d
```

**Docs vs. Shipped Compose File**

Yes, the syslog setup page should match the `docker-compose.yml` that ships with the repo; I'll sync them as part of the doc updates.
**Separating Logs by Server**

Currently, syslog uses the `ident` field (program name) as the service, falling back to hostname. To separate them cleanly (e.g., Unraid vs. Proxmox), you have two main options:

**Option A: Different ports per server (cleanest / recommended).** This gives you full isolation. You'd set up two input blocks:

```ini
# Unraid on port 5514
[INPUT]
    Name    syslog
    Parser  syslog-unraid
    Listen  0.0.0.0
    Port    5514
    Mode    udp
    Tag     syslog.unraid

# Proxmox on port 5515
[INPUT]
    Name    syslog
    Parser  syslog-rfc3164
    Listen  0.0.0.0
    Port    5515
    Mode    udp
    Tag     syslog.proxmox
```
**Option B: Rewrite tags (if you must use port 514).** If you need everything on the standard port, you can use a `rewrite_tag` filter to route logs based on the hostname:
```ini
[FILTER]
    Name    rewrite_tag
    Match   syslog.*
    Rule    $host ^unraid.* syslog.unraid false
    Rule    $host ^proxmox.* syslog.proxmox false

[FILTER]
    Name    lua
    Match   syslog.unraid
    script  /fluent-bit/etc/parse_unraid.lua
    call    parse_unraid
```

If you share a sample of the Unraid syslog format, I can help you write that custom parser!

**Unraid Syslog Format**

If you're seeing parsing issues or wrong timestamps, check the logs (`docker compose logs fluent-bit`) and try setting `SYSLOG_TZ_OFFSET` in your `.env` (e.g., `SYSLOG_TZ_OFFSET=1` for CET).

Thanks again for taking the time to write this up! I'll be updating the docs based on your feedback this week. Let me know if you hit any other errors in the docs; I'm going to fix the reported one.
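In the meantime, here's a rough sketch of what a `syslog-unraid` parser entry could look like in `parsers.conf`, assuming Unraid emits RFC3164-style lines without the leading `<PRI>` field. The regex is illustrative only; it should be adjusted against a real log sample before use:

```ini
# Illustrative parser for RFC3164-style lines missing the <PRI> prefix,
# e.g. "Feb  3 12:34:56 tower kernel: eth0: link up".
# Field names (time, host, ident, pid, message) follow common Fluent Bit
# syslog parser conventions; verify against your actual Unraid output.
[PARSER]
    Name        syslog-unraid
    Format      regex
    Regex       ^(?<time>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}) (?<host>\S+) (?<ident>[^ :\[]+)(?:\[(?<pid>\d+)\])?:? *(?<message>.*)$
    Time_Key    time
    Time_Format %b %d %H:%M:%S
```

Note that `%b %d %H:%M:%S` has no year or timezone, which is exactly why the `SYSLOG_TZ_OFFSET` workaround above exists.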
@Zxurian Thanks for sharing the Komodo config! That helps me understand the setup. The auto-detection assumes direct browser-to-backend connectivity on predictable ports, but with orchestrators like Komodo managing the stack, the hostname the browser uses doesn't line up with what the backend sees, so hardcoding `PUBLIC_API_URL` is the right workaround.

About splitting syslog: yes, exactly. You can add multiple `[OUTPUT]` blocks with different API keys to route to different projects, with one key per project set in your `.env`. In theory the order should matter: more specific matches (`syslog.unraid`) should come before the catch-all (`syslog.*`).

Looking at your log snippet, it's actually valid RFC3164, just without the priority field, which some syslog implementations omit. The standard parser should work. Let me know if you run into any other issues!
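To make the multiple-`[OUTPUT]` idea concrete, here is a hedged sketch. The `Host`, `Port`, `URI`, and `Header` values are placeholders, not LogTide's actual ingest API; copy the real ones from the `[OUTPUT]` block in the shipped `fluent-bit.conf` and only vary `Match` and the key. `UNRAID_API_KEY` and `PROXMOX_API_KEY` are hypothetical `.env` variables:

```ini
# Hypothetical: one output per project, keyed off the rewritten tags.
# Host/URI/Header details are assumptions; take them from the shipped
# fluent-bit.conf and change only Match and the API key variable.

# Unraid logs -> Unraid project
[OUTPUT]
    Name    http
    Match   syslog.unraid
    Host    logtide
    Port    8080
    URI     /api/ingest
    Header  Authorization Bearer ${UNRAID_API_KEY}

# Proxmox logs -> Proxmox project
[OUTPUT]
    Name    http
    Match   syslog.proxmox
    Host    logtide
    Port    8080
    URI     /api/ingest
    Header  Authorization Bearer ${PROXMOX_API_KEY}
```

One design note: Fluent Bit delivers a record to every output whose `Match` pattern hits, so pairing a `syslog.*` catch-all with a `syslog.unraid` output would ship Unraid logs twice. Using one specific `Match` per output avoids that.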
Saw the 0.40 post on reddit and decided to try centralizing my homelab logs. I'm not an expert by any means, but dangerous enough to look things up and try solving them myself. Got stuck at a few points, though. I followed the install docs at https://logtide.dev/docs/getting-started/, then https://logtide.dev/docs/syslog/, taking the "follow the docs to get things running" approach, and got 99% of the way there. My environment is structured so that all compose stacks are handled by Komodo.
Setup Note 1
Using the basic installation, everything came up, but when trying to access the front end for the first time, I was getting a Failed to Fetch error. After some research and poking, I found the `PUBLIC_API_URL` setting in `logtide/docker.env.example`, which is referenced on the installation page. Only by uncommenting `PUBLIC_API_URL` and setting it to the docker hostname reference was the dashboard signin/signup page able to work (in my case, `http://docker-02.mydomain.network:8080`). For the "Quick start" impression that the getting-started page gives, it might be a good idea to at least list this one setup change in case the initial deployment fails auto-detection. (I can provide details of my environment if you want to diagnose why auto-detection failed.)

Setup Note 2
While I run docker, I wouldn't say I'm "comfortable" with docker yet. Within the docker compose file, you've got fluent-bit listed with a `profile` declaration. I wasn't aware of what the `profile` did and originally couldn't figure out why the fluent-bit image wasn't coming up with the full stack. May I suggest adding a tip on the syslog setup page noting that you have to add `COMPOSE_PROFILES=logging` to make sure the fluent image starts?

Onto syslog ingestion setup. Per the syslog page, I set up the 4 files with the default data provided.
Setup Note 3
Under the Docker Compose Setup section, the service differs from the default one described in https://github.com/logtide-dev/logtide/blob/main/docker/docker-compose.yml. The differences: the `docker-compose.yml` file has an extra lua file, `wrap_logs.lua`, which isn't defined on the syslog setup page, and it also has the `profiles` declaration, whereas the syslog setup page has no `profiles` declaration. Should the syslog setup page's compose match what's being distributed in the default stack compose?

Once fluent-bit was running, I attempted to set up my Unraid server to push syslog to it. First pitfall: nothing was coming through. I looked at the fluent-bit logs (via Dozzle) and saw it was repeatedly getting an "authorization failed" when trying to talk to logtide. Looking at the compose for fluent-bit, there's a reference to an environment variable, `LOGTIDE_API_KEY: ${FLUENT_BIT_API_KEY:-}`, that isn't mentioned anywhere in the docs. Taking a guess, I created a new Project in Logtide, generated a new API key, then set that env variable with the new API key and redeployed. Now fluent-bit was forwarding logs to logtide. However, within the logs the service was coming across as just `syslog`. Given this, it stands to reason that any other server I send syslog from is all going to show up in the same project. Is there a way to have a separate project per server that is sending syslog logs over? Or did I miss how to do that and am using the wrong API key?

Looking at the `fluent-bit.conf`, it looks like Unraid's syslog messages don't follow the standard RFC3164 format (Example Unraid syslog). Is there a way to adjust the input/filter/parser so that any logs coming from a specific server are sent through a specific regex for parsing?