You could use Grafana Loki to handle logs. It's similar to Prometheus, so if you're already using that and/or Grafana it's an easy setup, and the API is really simple too.
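To give a feel for how simple the API is, here's a minimal sketch of pushing a single log line to Loki's push endpoint (`POST /loki/api/v1/push`) with only the Python standard library. The hostname `loki.example` and the `job` label are placeholders for your own setup:

```python
import json
import time
import urllib.request

def loki_payload(line, labels):
    """Build a Loki push-API payload for one log line."""
    ts = str(time.time_ns())  # Loki expects nanosecond timestamps as strings
    return {"streams": [{"stream": labels, "values": [[ts, line]]}]}

payload = loki_payload("hello from my homelab", {"job": "demo"})

req = urllib.request.Request(
    "http://loki.example:3100/loki/api/v1/push",  # placeholder host
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment once a Loki instance is reachable
```

In practice Promtail (or another agent) does this for you; the point is just that the wire format is a small JSON document, so ad-hoc scripts can log to Loki too.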
I second this. Loki for logs, VictoriaMetrics for metrics: it's significantly more lightweight than an ELK stack (and any lag is irrelevant for a homelab), and VictoriaMetrics is similarly much more careful with RAM than Prometheus.
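If you already run Prometheus, the least-disruptive way to try VictoriaMetrics is as a remote-write target; this is a sketch of a `prometheus.yml` fragment, where the `victoriametrics` hostname and port 8428 (VM's default) are assumptions about your deployment:

```yaml
# prometheus.yml fragment: ship a copy of all samples to VictoriaMetrics
remote_write:
  - url: http://victoriametrics:8428/api/v1/write
```

Once you're happy with it, VM's agent (vmagent) can take over scraping entirely and Prometheus can be retired.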
I did stumble on Grafana Loki in my search. Was trying to figure out if it was overkill. Is it fairly lightweight?
Much less resource-intensive than anything Elasticsearch-based. I have Loki, Grafana, and 3 Promtail clients running for my env (switched from Graylog/Elasticsearch), and over the last few days Loki has been sitting at 3 GB of memory and 8% CPU while processing logs for about 6 devices.
Ok thanks. Looks like I can give it a try in docker.
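For a first try in Docker, a minimal Compose sketch like this is enough to poke around; it leans on the default configs shipped in the images, and in practice you'd mount your own Promtail config pointing at the `loki` service rather than relying on the image default:

```yaml
# docker-compose.yml — minimal sketch, not production-ready
services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log:ro   # host logs to scrape
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
```

Then add Loki as a data source in Grafana (`http://loki:3100`) and query with LogQL from the Explore tab.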
You can also take a look at OpenTelemetry. It's a huge open-source project with lots of functionality. It handles logs just fine and can also provide metrics and traces. It might be overkill for your needs, but it's an excellent tool.
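The usual entry point is the OpenTelemetry Collector, which is driven by a YAML pipeline config. A minimal logs-only sketch might look like this (the `debug` exporter was called `logging` in older Collector releases; in a real setup you'd export to Loki, Elasticsearch, etc.):

```yaml
# otel-collector config sketch: receive logs over OTLP, print them
receivers:
  otlp:
    protocols:
      grpc:           # listens on :4317 by default
exporters:
  debug:
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug]
```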
I'll check it out. Thanks.
I use a Graylog/OpenSearch/MongoDB stack to log everything. I spent a good amount of time writing parsers for each source, but the benefit is that everything is normalized, which makes searching easier. I'm happy with it as a solution!
I also use Graylog to aggregate logs from various devices (mostly from rsyslog over SSL/TLS). The only downsides for me are the license (not a big problem for a personal setup) and the resource usage of the general Graylog/Elasticsearch stack. I still think it's great.
I use this Ansible role to install and manage it.
For simpler setups with resource constraints, I would simply use an rsyslog server as the aggregator instead of Graylog, and lnav for the analysis/filtering/parsing part.
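The aggregator side of that setup is a few lines of rsyslog config; this is a sketch, and the listening port and the per-host file path are assumptions you'd adjust:

```
# /etc/rsyslog.d/aggregator.conf — minimal central-syslog sketch
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# write each sending host's logs to its own file (path is an assumption)
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%.log")
*.* action(type="omfile" dynaFile="PerHost")
```

lnav can then be pointed at the whole directory (`lnav /var/log/remote/`) and will merge, colorize, and let you filter across all hosts.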
All these newfangled projects, but really I just use remote rsyslogd. Works just fine, super robust, easy setup. You can literally be up and running within minutes.
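On the client side, forwarding everything to a central box really is a one-liner; a sketch, where `loghost.example` is a placeholder for your aggregator:

```
# /etc/rsyslog.d/forward.conf on each client
# "@@" forwards over TCP; a single "@" would use UDP
*.* @@loghost.example:514
```

Restart rsyslog (`systemctl restart rsyslog`) and logs start flowing.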
That's been my go-to in the past, but since Debian 12 leaned into journald, I've been looking into ways to work with that.
It's insane that journald doesn't include a remote option. A feature used in industry for over two decades. 🤦
FWIW I use an Elastic stack for that: Filebeat and Journalbeat to collect logs, Logstash to sort and parse them, and Elasticsearch to store them. Not sure if it satisfies your FOSS requirement, as I don't believe it's entirely open source.
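The collection side is mostly a small `filebeat.yml`; a sketch, where the hostname and paths are placeholders (note that standalone Journalbeat has since been folded into Filebeat as a journald input on newer versions):

```yaml
# filebeat.yml sketch: tail plain log files, ship them to Logstash
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/*.log
output.logstash:
  hosts: ["logstash.example:5044"]   # placeholder host
```

Logstash then does the parsing (grok, etc.) before indexing into Elasticsearch.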