How much maintenance do you find your self-hosting involves?
I recognize this will vary depending on how *much* you self-host, so I'm curious about the range of experiences, from hosting a few things to hosting many. Also, how would you compare it to the maintenance of your other systems (e.g. personal computer, phone, etc.)?

Thinking of building a database of “stuff” that I have at home + some other family households. Multiple accounts with private and shared inventories.
The use case is basically so that all my family members can check things like "John has an old laptop collecting dust" or "Mary has this specific tool that I'd love to use for my current project". It would be awesome if you could also have a private inventory, aside from the "shared knowledge". So, what do you guys use for this? Maybe it does not have to be self-hosted, but I have a sense the best solutions for this use case are.

cross-posted from: > ##### HashiCorp joins IBM to accelerate the mission of multi-cloud automation and bring the products to a broader audience of users and customers. > > Today [we announced]( that HashiCorp has signed an agreement to be acquired by IBM to accelerate the multi-cloud automation journey we started almost 12 years ago. I’m hugely excited by this announcement and believe this is an opportunity to further the HashiCorp mission and to expand to a much broader audience with the support of IBM. > > When we started the company in 2012, the cloud landscape was very different than today. Mitchell and I were first exposed to public clouds as hobbyists, experimenting with startup ideas, and later as professional developers building mission-critical applications. That experience made it clear that automation was absolutely necessary for cloud infrastructure to be managed at scale. The transformative impact of the public cloud also made it clear that we would inevitably live in a multi-cloud world. Lastly, it was clear that adoption of this technology would be driven by our fellow practitioners who were reimagining the infrastructure landscape. > > We founded HashiCorp with a mission to enable cloud automation in a multi-cloud world for a community of practitioners. Today, I’m incredibly proud of everything that we have achieved together. Our products are downloaded hundreds of millions of times each year by our passionate community of users. Each year, we certify tens of thousands of new users on our products, who use our tools each and every day to manage their applications and infrastructure. > > We’ve partnered with thousands of customers, including hundreds of the largest organizations in the world, to power their journey to multi-cloud. They have trusted us with their mission-critical applications and core infrastructure. One of the most rewarding aspects of infrastructure is quietly underpinning incredible applications around the world. 
We are proud to enable [millions of players to game together](, [deliver loyalty points for ordering coffee](, connect self-driving cars, and secure trillions of dollars of transactions daily. This is why we’ve always believed that infrastructure enables innovation. > > The HashiCorp portfolio of products has grown significantly since we started the company. We’ve continued to work with our community and customers to identify their challenges in adopting multi-cloud infrastructure and transitioning to zero trust approaches to security. These challenges have in turn become opportunities for us to build new products and services on top of the HashiCorp Cloud Platform. > > This brings us to why I’m excited about today's announcement. We will continue to build products and services as HashiCorp, and will operate as a division inside IBM Software. By joining IBM, HashiCorp products can be made available to a much larger audience, enabling us to serve many more users and customers. For our customers and partners, this combination will enable us to go further than as a standalone company. > > The community around HashiCorp is what has enabled our success. We will continue to be deeply invested in the community of users and partners who work with HashiCorp today. Further, through the scale of the IBM and Red Hat communities, we plan to significantly broaden our reach and impact. > > While we are more than a decade into HashiCorp, we believe we are still in the early stages of cloud adoption. With IBM, we have the opportunity to help more customers get there faster, to accelerate our product innovation, and to continue to grow our practitioner community. > > I’m deeply appreciative of the support of our users, customers, employees, and partners. It has been an incredibly rewarding journey to build HashiCorp to this point, and I’m looking forward to this next chapter. > > ::: spoiler Additional Information and Where to Find It > HashiCorp, Inc. 
(“HashiCorp”), the members of HashiCorp’s board of directors and certain of HashiCorp’s executive officers are participants in the solicitation of proxies from stockholders in connection with the pending acquisition of HashiCorp (the “Transaction”). HashiCorp plans to file a proxy statement (the “Transaction Proxy Statement”) with the Securities and Exchange Commission (the “SEC”) in connection with the solicitation of proxies to approve the Transaction. David McJannet, Armon Dadgar, Susan St. Ledger, Todd Ford, David Henshall, Glenn Solomon and Sigal Zarmi, all of whom are members of HashiCorp’s board of directors, and Navam Welihinda, HashiCorp’s chief financial officer, are participants in HashiCorp’s solicitation. Information regarding such participants, including their direct or indirect interests, by security holdings or otherwise, will be included in the Transaction Proxy Statement and other relevant documents to be filed with the SEC in connection with the Transaction. Additional information about such participants is available under the captions “Board of Directors and Corporate Governance,” “Executive Officers” and “Security Ownership of Certain Beneficial Owners and Management” in HashiCorp’s definitive proxy statement in connection with its 2023 Annual Meeting of Stockholders (the “2023 Proxy Statement”), which was filed with the SEC on May 17, 2023 (and is available at To the extent that holdings of HashiCorp’s securities have changed since the amounts printed in the 2023 Proxy Statement, such changes have been or will be reflected on Statements of Change in Ownership on Form 4 filed with the SEC (which are available at Information regarding HashiCorp’s transactions with related persons is set forth under the caption “Related Person Transactions” in the 2023 Proxy Statement. 
Certain illustrative information regarding the payments that may be owed, and the circumstances in which they may be owed, to HashiCorp’s named executive officers in a change of control of HashiCorp is set forth under the caption “Executive Compensation—Potential Payments upon Termination or Change in Control” in the 2023 Proxy Statement. With respect to Ms. St. Ledger, certain of such illustrative information is contained in the Current Report on Form 8-K filed with the SEC on June 7, 2023 (and is available at > > Promptly after filing the definitive Transaction Proxy Statement with the SEC, HashiCorp will mail the definitive Transaction Proxy Statement and a WHITE proxy card to each stockholder entitled to vote at the special meeting to consider the Transaction. STOCKHOLDERS ARE URGED TO READ THE TRANSACTION PROXY STATEMENT (INCLUDING ANY AMENDMENTS OR SUPPLEMENTS THERETO) AND ANY OTHER RELEVANT DOCUMENTS THAT HASHICORP WILL FILE WITH THE SEC WHEN THEY BECOME AVAILABLE BECAUSE THEY WILL CONTAIN IMPORTANT INFORMATION. 
Stockholders may obtain, free of charge, the preliminary and definitive versions of the Transaction Proxy Statement, any amendments or supplements thereto, and any other relevant documents filed by HashiCorp with the SEC in connection with the Transaction at the SEC’s website ( Copies of HashiCorp’s definitive Transaction Proxy Statement, any amendments or supplements thereto, and any other relevant documents filed by HashiCorp with the SEC in connection with the Transaction will also be available, free of charge, at HashiCorp’s investor relations website (, or by emailing HashiCorp’s investor relations department ( > ::: > > ::: spoiler Forward-Looking Statements > This communication may contain forward-looking statements that involve risks and uncertainties, including statements regarding (i) the Transaction; (ii) the expected timing of the closing of the Transaction; (iii) considerations taken into account in approving and entering into the Transaction; and (iv) expectations for HashiCorp following the closing of the Transaction. There can be no assurance that the Transaction will be consummated. 
Risks and uncertainties that could cause actual results to differ materially from those indicated in the forward-looking statements, in addition to those identified above, include: (i) the possibility that the conditions to the closing of the Transaction are not satisfied, including the risk that required approvals from HashiCorp’s stockholders for the Transaction or required regulatory approvals to consummate the Transaction are not obtained, on a timely basis or at all; (ii) the occurrence of any event, change or other circumstance that could give rise to a right to terminate the Transaction, including in circumstances requiring HashiCorp to pay a termination fee; (iii) possible disruption related to the Transaction to HashiCorp’s current plans, operations and business relationships, including through the loss of customers and employees; (iv) the amount of the costs, fees, expenses and other charges incurred by HashiCorp related to the Transaction; (v) the risk that HashiCorp’s stock price may fluctuate during the pendency of the Transaction and may decline if the Transaction is not completed; (vi) the diversion of HashiCorp management’s time and attention from ongoing business operations and opportunities; (vii) the response of competitors and other market participants to the Transaction; (viii) potential litigation relating to the Transaction; (ix) uncertainty as to timing of completion of the Transaction and the ability of each party to consummate the Transaction; and (x) other risks and uncertainties detailed in the periodic reports that HashiCorp files with the SEC, including HashiCorp’s Annual Report on Form 10-K. 
All forward-looking statements in this communication are based on information available to HashiCorp as of the date of this communication, and, except as required by law, HashiCorp does not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made. > :::

RIP HashiCorp. Any good alternatives? We use most of their stack.

Introducing, a Directory of Companion Apps for Self-Hosted Software
Not my website. Interested to see how this will play out though!

Gluetun: The Little VPN Client That Could
cross-posted from: > [Promoting] Gluetun: The Little VPN Client That Could > > My journey with docker started with a bunch of ill fated attempts to get an OpenVPN/qBittorrent container running. The thing ended up being broken and never worked right, and it put me off of VPN integration for another year or so. > > Then recently I found [Gluetun](…and holy fucking cow. This thing is the answer to every VPN need I could possibly think of. I have set it up with 3 different providers now, and it has been more simple and reliable than the clients made by the VPN providers themselves every time. > > If you combine the power of Gluetun with the power of [Portainer](, then you can even easily edit settings for your existing containers and hook them up to a VPN connection in seconds (or disconnect them). Just delete the forwarded ports in the original container, select the Gluetun container as the network connection, and then forward the same ports in Gluetun. Presto, you now have a perfectly functioning container connected to a VPN with a killswitch. > > So if any of y’all on the high seas have considered getting more serious about your privacy, don’t do what I did and waste a bunch of time on a broken container. Use Gluetun. Love Gluetun. Gluetun is the answer.
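For anyone wanting to try the routing trick described above, it looks roughly like this in compose form. This is a minimal sketch — the provider, key, and qBittorrent image are placeholder assumptions; check Gluetun's wiki for your provider's exact variables:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad      # placeholder provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=changeme    # placeholder key
    ports:
      - "8080:8080"   # qBittorrent's web UI, forwarded on gluetun instead
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic (and the killswitch) goes through gluetun
```

Because qbittorrent shares gluetun's network namespace, its ports get published on the gluetun container — which is exactly the "delete the forwarded ports in the original container" step described above.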

Should I or should I not use a VLAN? I have trouble understanding the benefits for home use
Hey everyone, I am completely stripping my house and am currently thinking about how to set up the home network. This is my use case:

- Home server that can access the internet + Home Assistant that can access IoT devices
- KNX that I want to have access to Home Assistant and vice versa
- IoT devices over WiFi (maybe Thread in the future), the vast majority homemade via ESPHome. I want them to be able to access the server and the other way around (sending data updates and, in the future, voice commands)
- 3 PoE cameras through a 4-port PoE switch
- A Chromecast & Nintendo Switch that need internet access

Every router worth anything already has a guest network, so I don't see much value in separating out a VLAN in a home use case. My IoT devices work locally, not through the cloud. I want them to work functionally flawlessly with Home Assistant, especially anything on battery, so it doesn't kill its battery retrying until Home Assistant polls. The PoE cameras can easily have their internet access blocked on most routers via parental controls or similar, and I want them to be able to send data to the on-server NVR. I already have PiHole blocking most phone-homes from the Chromecast or guest devices.

So far it seems like a VLAN is not too useful for me, because I would want bidirectional access to the server, which in turn should have access from the LAN and WiFi, and vice versa. Maybe I am not thinking of the access control capability of VLANs correctly (I am thinking in terms of port-based iptables: port X has only incoming+established and no outgoing, for example). I figure if my network is already penetrated, it would most likely be via the WiFi or internet, so a VLAN seems to not protect from much in my specific use case. Am I completely wrong on this?

Selfhosted messenger/community software like discord
Hello, I would like to hear your opinions about a good self-hosted messenger like Discord. To list exactly what I mean by that:

- No need for federation (it will only be used by friends)
- E2EE
- Support for direct messages
- Support for Discord-like server management, by which I mean the ability to set rooms and topics for such rooms

From what I know, this seems to be more similar to Slack alternatives, but I wanted to hear the opinions of others. I have been thinking about either Matrix, Mattermost, or Revolt. I already have an XMPP server, but setting up encryption and clients has turned away quite a few people I would like to get onto this platform.

EDIT: As pointed out by other people, E2EE isn't needed for my use case if there's no federation.

what’s your fav recipe manager?
I use nextcloud cookbook but I would really love another or a federated alternative. It does its job but I don't think other people I know would use it.

Self hosted language translators?
I am a teacher and have students who speak many different languages. The most common ones are Chinese, Spanish and Portuguese, but we have other folks speaking other languages as well. I wish to translate all my notes, lecture subtitles, and topic-exercise documents to other languages. Moreover, every year I have to update my teaching material and include new stuff, so muddling through everything manually is not much of an option, as it might take up to two months just for this task. Are there nice self-hosted and libre/open source solutions out there for this task?

What to be aware of before opening port 25 on a postfix Raspberry Pi?
I have a raspberry pi running postfix. I realised that unless I open port 25 I absolutely cannot receive emails (I have 587 open and can send but not receive). However, I heard there are scary actors online who could potentially send emails from your server without consent. I also believe my ISP doesn't block port 25. Is there anything I should do right now before opening port 25, or should everything be safe enough?
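One concrete thing worth verifying before opening 25: that Postfix won't relay for strangers. This is a sketch of the relevant main.cf settings — these are already the defaults since Postfix 2.10, so it's a check rather than a change:

```
# /etc/postfix/main.cf
# Only local networks and authenticated users may relay mail; everyone else
# can only deliver to domains this server is the final destination for.
smtpd_relay_restrictions = permit_mynetworks,
    permit_sasl_authenticated,
    defer_unauth_destination
mynetworks = 127.0.0.0/8
```

You can print the active values with `postconf smtpd_relay_restrictions mynetworks`. If `mynetworks` only covers localhost/your LAN, random internet hosts can't send mail *through* you, only *to* you.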

> Self-hosted pastebin powered by Git, open-source alternative to Github Gist. - thomiceli/opengist

Openvpn / pihole - change administration link
Hi So, I've had openvpn running on a raspberry pi for over a year with no problems. After some braining I concluded that I also should run pihole on the same raspberry pi. After all it's "just a DNS resolver". Both are running great together on the raspberry. But, I can't administer openvpn because they both use "<IP of raspberry>/admin", and because I installed pihole last, well I can now only administer pihole. So, how do I make one of them use something like "<ip of raspberry>/admin2" ? Thanks
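Since Pi-hole's web UI is served by lighttpd, one low-effort option — a sketch, not official guidance, and paths can differ by install — is to move Pi-hole's interface to another port so the OpenVPN admin page gets `/admin` on port 80 back:

```
# /etc/lighttpd/external.conf (Pi-hole keeps this file across updates)
server.port := 8081
```

Then restart lighttpd (`sudo systemctl restart lighttpd`) and reach Pi-hole at `<IP of raspberry>:8081/admin`, while the OpenVPN admin page keeps `<IP of raspberry>/admin`.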

Wireguard connecting 1:n docker containers for object storage
Hi folks, I'm trying to put my newly acquired HDDs to good use and at the same time manage the minuscule amount of storage my VPS has. Since it is hosting several fedi services, I need some external storage, and I figured I'd just tunnel some object storage from home. So I set up a working wireguard connection from my home network to the VPS, connected the object storage (GarageHQ) to one end, and will probably connect the fedi services (lemmy, mastodon, matrix, peertube) to the other. Here comes the issue: do I have to make the respective wireguard instances a proxy for the services to be able to speak with each other, or do I even have to make a site-to-site connection to connect the two docker networks? The connection would look something like this: GarageHQ---WireguardHS---WireguardVPS---Mastodon|Mastodon---NginxPM---OpenWeb Anyone got something like this to work so far? Am I overlooking something major? Thanks for reading, have a good one.
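In setups like this you generally don't need a proxy or a full site-to-site arrangement, as long as each side's `AllowedIPs` covers the subnets the services actually live in. A sketch of the VPS side — all names, keys, and subnets here are made-up placeholders:

```
# /etc/wireguard/wg0.conf on the VPS (illustrative addresses only)
[Interface]
Address = 10.8.0.2/24
PrivateKey = <vps-private-key>

[Peer]
# Home endpoint running GarageHQ
PublicKey = <home-public-key>
Endpoint = home.example.org:51820
# Tunnel IP of the home peer *plus* the docker subnet Garage sits in,
# so the VPS can route straight to the container network:
AllowedIPs = 10.8.0.1/32, 172.20.0.0/16
PersistentKeepalive = 25
```

The home side then needs `net.ipv4.ip_forward=1` (and usually a masquerade rule) so packets arriving over the tunnel can be forwarded into the docker bridge. With that in place, Mastodon can talk to Garage's S3 endpoint by IP with no proxying required.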

How do I setup my own FOSS shopping website for my business?
Hello, I don't have much experience in self-hosting, I'm buying a ProtonVPN subscription and would like to port forward. I have like no experience in self-hosting but a good amount in Linux. I'm planning on using Proxmox VE with a YunoHost VM. I already have a domain name from Njalla. I'm setting up a website for my computer store. I want it to have listings and payment options so they can check out there. I want my customer data to be secure. I don't want it to have any JavaScript or nasty trackers. I want it to be FOSS. Any help is highly appreciated!

[Question] Self hosted setup for monitoring Self-hosted services?
Hi all. I just set up my first self-hosting server with NextCloud, Immich and a VPN server. I was wondering if there is a tool or layer of tools which would help me monitor my server and the services, including running stats, resource usage stats, system logs, access logs, etc.? I read that Grafana Loki along with Prometheus could possibly help me with this. I just wanted to ask: should I explore these two tools, or are there other tools better suited to my needs? Please recommend open source tools only. Preferably Docker, or Linux based otherwise. Thank you :))
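Prometheus + Grafana (with Loki for logs) is a solid, fully open source fit for exactly this. The usual pattern is node_exporter for host metrics and cAdvisor for per-container metrics; a minimal scrape config might look like the sketch below (the target names assume containers on a shared Docker network and are placeholders):

```yaml
# prometheus.yml — minimal sketch
scrape_configs:
  - job_name: node          # host CPU/RAM/disk via node_exporter
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: cadvisor      # per-container resource usage
    static_configs:
      - targets: ['cadvisor:8080']
```

Grafana then points at Prometheus (and Loki) as data sources. For a lighter-weight start, Netdata or Uptime Kuma cover a lot of ground with near-zero configuration.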

Hey all, I've spent the majority of the last year hammering away at Pinepods. It's a Rust-based podcast management system with multi-user support that relies on a central database with clients that connect to it. It's complete with a browser-based client, and your podcasts and settings follow you from device to device since everything is stored on the server. AntennaPod is great and all, but sometimes I want to listen to podcasts from my laptop; here's a great solution to that problem. There's also a client edition that you can download and install.

Search both The Podcast Index and iTunes to browse through shows and episodes, import or export OPMLs of your podcasts, and utilize the Podcasting 2.0 standard. It's all fully dockerized and you can have an instance of your own up and running in 5 minutes! If you're on the fence, you can try it out without installing the server too! Check the website for more info.

There's a lot more to come down the pipeline as well, such as a lightweight client to stream episodes to and alternative database support. Now is the perfect time to check it out and enjoy continued feature updates! Feel free to open issues or PRs if you experience any problems, or drop a line on the Discord. I'm happy to help! Official website: Github: Discord:

Server behind CGNAT - Reverse VPN? Or how to bypass?
The problem in a short sentence... the title. I have a server in a remote location which also happens to be under CGNAT. I only get to visit this location once a year at best, so if anything goes off... it stays off for the rest of that year until I can go and troubleshoot. I have a main location/home where everything works, I get a fixed IP and I can connect multiple services as desired. I'd like to set this up so I could publish internal servers such as HA or similar at this remote location, and reach them in a way easy enough that I could install the apps for non-tech users and they could just use them through a normal URL. Is this possible? I already have a PiVPN running wireguard at the main location, and I just tested an LXC container from the remote location; it connects via wireguard to the main location just fine and can ping/ssh machines correctly. But I can't reach this VPN-connected machine from the main location. Alternatively, I'm happy to listen to alternative solutions/ideas on how to connect this remote location to the main one somehow. Thanks!
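The usual fix for "the remote peer can reach out but can't be reached back" is twofold: the PiVPN server's `[Peer]` entry for that machine must list the remote networks you want to reach in `AllowedIPs`, and the remote peer must keep the CGNAT mapping alive. A sketch — every address and key here is a placeholder:

```
# On the remote LXC's wireguard config
[Peer]   # the PiVPN server at the main location
PublicKey = <server-public-key>
Endpoint = main-location.example.org:51820
AllowedIPs = 10.6.0.0/24
PersistentKeepalive = 25   # essential behind CGNAT: keeps the NAT mapping open

# On the PiVPN server, the peer entry for the remote LXC
[Peer]
PublicKey = <lxc-public-key>
AllowedIPs = 10.6.0.5/32, 192.168.8.0/24   # its tunnel IP + the remote LAN behind it
```

With IP forwarding enabled on the LXC, the main site can then reach Home Assistant at its remote-LAN address through the tunnel, and you can front that with a normal URL via a reverse proxy at the main location.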

I just won an auction for 25 computers. What should I setup on them?
I placed a low bid on an auction for 25 EliteDesk 800 G1s on a government auction and unexpectedly won (ultimately paying less than $20 per computer). In the long run I plan on selling 15 or so of them to friends and family for cheap, and I'll probably have 4 with Proxmox (3 for a lab cluster and 1 for the always-on home server) and keep a few for spares and random desktops around the house where I could use one. But while I have all 25 of them, what crazy clustering software/configurations should I run? Any fun benchmarks I should know about that I could run for the lolz?

Edit to add: Specs based on the auction listing and looking at computer models:
- 4th gen i5s (probably i5-4560s or similar)
- 8GB of DDR3 RAM
- 256GB SSDs
- Windows 10 Pro (no mention of licenses, so that remains to be seen)
- Looks like 3 PCIe slots (2 1x and 2 16x physically, presumably half-height)

Possible projects I plan on doing:
- Proxmox cluster
- Bare-metal Kubernetes cluster
- Harvester HCI cluster (which has the benefit of also being a Rancher cluster)
- Automated Windows image creation, deployment and testing
- Pentesting lab
- Multi-site enterprise network setup and maintenance
- Linpack benchmark, then compare to previous TOP500 lists

I keep finding new apps, so I'll keep sharing! Tonight's fish on the hook is an audiobook client for Jellyfin and AudioBookShelf, written in Swift/SwiftUI.

> Apart from playing audiobooks, plappa also syncs playback status via iCloud and allows you to download audiobooks for offline listening. It currently runs on iPhone and iPad; a Mac and Apple TV version are planned - [roadmap](

[Github Repo](

# About

plappa requires a Jellyfin or AudioBookShelf server to work. If you don't know Jellyfin or AudioBookShelf and would like to learn more, check out both at [Jellyfin website]( or [Jellyfin GitHub repo]( and [AudioBookShelf website]( or [AudioBookShelf GitHub repo](

### Folder structure and formats for Jellyfin

plappa should be able to handle all common audio file formats, but is built for and tested mainly with MP3 and M4B files. I always test plappa using the [most common organization scheme for books](, but other folder structures should work fine; plappa just searches for audio files recursively, grouping by album.

### Metadata

Most metadata will be taken from Jellyfin/AudioBookShelf; plappa additionally reads the `composer` field for the narrator name and (if applicable) chapters from the file metadata.

## Roadmap

You can see the full roadmap in the [plappa project](, the short version is:
- [x] iOS App with all basic features
- [x] CarPlay support (WIP)
- [ ] Apple Watch App
- [ ] Mac App
- [ ] Apple TV App
- [x] Support for [AudioBookShelf servers]( (WIP)

Self hosted remote storage for VPS?
I'm currently running both a home server and a VPS. The former is not reachable through the internet, only through VPN. The latter hosts public services. The VPS is regularly cutting it very close with storage, and today I messed up and crashed the whole stack trying to make an impromptu backup. Lesson learned: we need more storage! I could just **rent** more storage, but just today I updated my home server with 16 TB of RAID 1 enterprise HDDs. So I thought I could maybe do a (wireguard) VPN tunnel directly to some storage service that I host on my home server. The upload is not great, but realistically I don't need much. The important stuff stays on the VPS; mainly videos, pictures and other stuff that doesn't get accessed a lot should go home. The rest should be "cached" at the VPS. I would have to host wireguard on a server port, only have it access one folder which doesn't contain anything important, forward the port on the router, and have the VPS hold the keys. Even if someone gets into the VPS and steals the keys, they only get that one file storage folder. Has anyone done this? Are there services that do this, or do I just host wireguard and that's it? Thanks for reading. Have a good one! :)
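This pattern works, and one common way to do it without inventing a new file protocol is to expose the home storage as S3 (e.g. Garage or MinIO) over the tunnel and sync or mount it with rclone from the VPS. A hypothetical rclone remote for a Garage instance reachable over wireguard — the remote name, tunnel IP, and keys are placeholders:

```
# ~/.config/rclone/rclone.conf on the VPS
[homegarage]
type = s3
provider = Other
endpoint = http://10.8.0.1:3900    # Garage's S3 API over the tunnel
access_key_id = <garage-key-id>
secret_access_key = <garage-secret>
```

Then something like `rclone sync /srv/media homegarage:media` (or `rclone mount` for transparent access) moves the rarely-accessed files home while the hot data stays on the VPS.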

How to use a custom domain with Tailscale on a Synology NAS?
I've spent too many hours googling this stuff without a solution in sight that I'm able to understand. I am moderately new to selfhosting, especially the networking aspect. To put it simply, all I want is to be able to access my services through Tailscale using my own domain. I have gotten so far as to point my domain to my Tailscale IP (using Cloudflare's DNS), so that I don't have to copy-paste the Tailscale IP, but that means I still have to type in the ports of the services. Between the posts saying Tailscale can handle this, the ones saying Synology can do it, and the remaining posts saying to use a reverse proxy (and the ones saying a reverse proxy is a bad idea because of Synology stuff), I am now very lost. The terminology is exhausting and everyone is already so knowledgeable that they skip the basic steps and go straight to complex, short answers. I'd like to keep using Tailscale, as I don't want to deal with security issues and SSL certificates and all that, and if possible I'd like to avoid using a reverse proxy such as NPM or Caddy if there's a built-in Tailscale/Synology solution that works. To me more services just means more stuff that can break, and I really just want this stuff to work without fiddling with it. Thanks for any help you can provide

How do you guys handle reverse proxies in rootless containers?
I've been trying to migrate my services over to rootless Podman containers for a while now and I keep running into weird issues that always make me go back to rootful. This past weekend I almost had it all working until I realized that my reverse proxy (Nginx Proxy Manager) wasn't passing the real source IP of client requests down to my other containers. This meant that all my containers were seeing requests coming solely from the IP address of the reverse proxy container, which breaks things like Nextcloud brute force protection, etc. It's apparently due to this Podman bug: This is the last step before I can finally switch to rootless, so it makes me wonder what all you self-hosters out there are doing with your rootless setups. I can't be the only one running into this issue right? If anyone's curious, my setup consists of several docker-compose files, each handling a different service. Each service has its own dedicated Podman network, but only the proxy container connects to all of them to serve outside requests. This way each service is separated from each other and the only ingress from the outside is via the proxy container. I can also easily have duplicate instances of the same service without having to worry about port collisions, etc. Not being able to see real client IP really sucks in this situation.

Send WoL signal though Opnsense networks
So I don't know what I'm doing wrong. I have 3 interfaces on OPNsense:
1: Server=
2: Wlan=
3: Wireguard=
And lastly the WAN with its default configuration. I want to be able to send a WoL packet through the Wlan network to wake up my PC that is inside the Server network. In Firewall > Rules > Wlan I made a new rule like this:
Action=Pass
Interface=Wlan
Direction=in
TCP/IP=IPv4
Protocol=UDP
Source=any
Destination=Server address
Destination port range=from 7 to 7
When I try sending a WoL signal a few minutes after my PC went to sleep, I think the rule goes through, but when I try to send from another device a long time later it doesn't seem to. I'm using Moonlight to send the signal. Has anyone been through this problem? Thank you
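One thing that may explain the "works right after sleep, fails later" behaviour: WoL packets are normally broadcasts, which routers don't forward between networks. Right after sleep the PC's ARP entry is often still cached on the firewall, so a unicast WoL to its old IP still gets delivered; once the entry expires, the packet has nowhere to go. Sending a *directed broadcast* to the Server subnet (and allowing UDP 9 as well as 7 — most tools default to 9) usually fixes it. A small sketch for testing from any machine on the Wlan side; the MAC and broadcast address are placeholders:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """A WoL magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str, port: int = 9) -> None:
    # e.g. send_wol("aa:bb:cc:dd:ee:ff", "192.168.1.255") for a /24 Server net
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

If this wakes the PC when Moonlight doesn't, the problem is how Moonlight addresses the packet rather than the OPNsense rule.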

[Tailscale] Can’t connect VPS to local network?
I set up Headscale and Tailscale using Docker on a VPS, which I want to use as my public IPv4 and reverse proxy to route incoming traffic to my local network and e.g. my home server. I also set up Tailscale using Docker on my home server and connected both to my Headscale server. I am able to ping one Tailscale container from the other and vice versa, and set up *--advertise-routes=* on my home server as well as *--accept-routes* on my VPS, but I can't ping local IP addresses from my VPS. What am I missing? Both containers are connected to the host network, and I have opened UDP ports 41641 and 3478 on my VPS.

How are you making services remotely accessible?
I need help figuring out where I am going wrong or being an idiot, if people could point out where... I have a server running Debian 12 and various docker images (Jellyfin, Home Assistant, etc.) controlled by Portainer. A consumer router assigns static IP addresses by MAC address. The router lets me define the IP address of a primary/secondary DNS. The router registers itself with DynDNS. I want to make this remotely accessible. From what I have read, I need to set up a reverse proxy. I have tried to follow various guides to give my server a cert for the reverse proxy, but it always fails. I figure the server needs the DynDNS address to point at it, but the scripts pick up the internal IP. How are people solving this?
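If the certificate step is what keeps failing, Caddy is probably the lowest-friction reverse proxy: it obtains and renews Let's Encrypt certificates on its own, as long as ports 80/443 are forwarded from the router to the server and the DynDNS name resolves to your connection. A minimal sketch — the hostnames and upstream ports are placeholders for your own, and assume your DynDNS provider lets you add extra hostnames:

```
# Caddyfile — hypothetical DynDNS names; one site block per service
jellyfin.myhome.dyndns.example {
    reverse_proxy 127.0.0.1:8096
}
ha.myhome.dyndns.example {
    reverse_proxy 127.0.0.1:8123
}
```

The scripts picking up the internal IP doesn't matter here: the cert challenge is driven by the public DNS name, so what counts is that the name resolves to your router and that 80/443 reach the machine running Caddy.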

Ended up trying to install this again and it only went and bloody worked. I'm happy!

Managing servers in multiple locations
How do you manage multiple machines in different locations? The use case is something like this: I want to self-host different apps in different locations as redundancy. Something like: I put one server in my house, one in my dad's house, and a couple of others in my siblings'/friends' houses, so that if the machine in my house goes down or the internet goes down, it can fall back to the other machines. I was thinking of using Docker Swarm on multiple Raspberry Pis, but opening a port on the router seems not secure at all. How do I connect those machines together? Should I put wireguard on server #1 and have the other servers connect to that server? But if the network/machine fails on that server, everything else will stop working.

Came across this in-development app that seems really clean. It's available for iOS/macOS [already]( It will be available on the Play Store and F-Droid eventually, but there is an [APK]( for those who want to try it out. **Github Project**: # Screenshots ![alt text]( ![alt text]( ![alt text]( ![alt text]( ![alt text](

fanless hardware for selfhosting lxd/docker
Hi, I am planning to set up a home server that I can get a remote desktop on via the web interface Guacamole. I have had it on a huge server before, but this time I really need it to be a fanless server. I need 32 GB+ of RAM, at least a 1 TB SSD, and enough CPU power to serve a Linux desktop running in LXD via Guacamole in a Docker image. Any recommendations for good hardware that isn't crazy expensive?

Why is Matrix mentioned more often than XMPP in self hosted forums?
I'm looking into hosting one of these for the first time. From my limited research, XMPP seems to win in every way, which makes me think I must be missing something. Matrix is almost always mentioned as the de facto standard, but I rarely see arguments for why it is better than XMPP. XMPP seems way easier to host, requires fewer resources, has many more client options, and is simpler and thus easier to manage and reason about when something goes wrong. So what's the deal?

After thinking about it for about a year, I decided to rename the project to 🚀NetAlertX. This will help prevent confusion about which fork someone is using and differentiate it from the now-stale upstream project. With roughly 1,800 commits on top of the stale project, I thought it deserved a new name. It will also remove the confusion about only supporting Raspberry Pis 😵 On top of the rename, I implemented ✨unlimited icons - just find an SVG you like and use it 😄. The rename from PiAlert to NetAlertX should be pretty straightforward, and existing setups should keep working; no manual migration steps should be necessary. Still, caution is recommended: check this thread for edge cases, and the guide if you decide to change your docker-compose.
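For existing docker-compose setups, the change mostly comes down to swapping the image reference. A hedged sketch; the old/new image tags and the service name below are my assumptions for illustration, so verify the exact names against the project's migration guide:

```shell
# Demo only: a hypothetical pre-rename compose file (names assumed)
cat > /tmp/compose-demo.yml <<'EOF'
services:
  pialert:
    image: jokobsk/pi.alert:latest
EOF

# Point the service at the renamed image
sed -i 's|jokobsk/pi.alert|jokobsk/netalertx|' /tmp/compose-demo.yml
grep 'image:' /tmp/compose-demo.yml
```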

Guide: How to run MediaCMS with Docker
This is a followup to [my previous post]( If you want to bind volumes outside of Docker, this is what you need to do. There was a huge permission and volume mapping problem. I mention the GitHub issues that helped me [here]( I hope this will help noobs and insecure people like me.

------------

```bash
cd /srv/path/Files
```

```bash
git clone
```

```bash
cd /srv/path/Files/mediacms
```

```bash
mkdir postgres_data \
  && chmod -R 755 postgres_data
```

```bash
nano docker-compose.yaml
```

```yaml
version: "3"

services:
  redis:
    image: "redis:alpine"
    restart: always
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3

  migrations:
    image: mediacms/mediacms:latest
    volumes:
      - /srv/path/Files/mediacms/deploy:/home/
      - /srv/path/Files/mediacms/logs:/home/
      - /srv/path/Files/mediacms/media_files:/home/
      - /srv/path/Files/mediacms/cms/
    environment:
      ENABLE_UWSGI: 'no'
      ENABLE_NGINX: 'no'
      ENABLE_CELERY_SHORT: 'no'
      ENABLE_CELERY_LONG: 'no'
      ENABLE_CELERY_BEAT: 'no'
      ADMIN_USER: 'admin'
      ADMIN_EMAIL: 'admin@localhost'
      ADMIN_PASSWORD: 'complicatedpassword'
    restart: on-failure
    depends_on:
      redis:
        condition: service_healthy

  web:
    image: mediacms/mediacms:latest
    deploy:
      replicas: 1
    ports:
      - "8870:80"  # whatever:80
    volumes:
      - /srv/path/Files/mediacms/deploy:/home/
      - /srv/path/Files/mediacms/logs:/home/
      - /srv/path/Files/mediacms/media_files:/home/
      - /srv/path/Files/mediacms/cms/
    environment:
      # ENABLE_UWSGI: 'no'  # keep commented
      ENABLE_CELERY_BEAT: 'no'
      ENABLE_CELERY_SHORT: 'no'
      ENABLE_CELERY_LONG: 'no'
      ENABLE_MIGRATIONS: 'no'

  db:
    image: postgres:15.2-alpine
    volumes:
      - /srv/path/Files/mediacms/postgres_data:/var/lib/postgresql/data/
    restart: always
    environment:
      POSTGRES_USER: mediacms
      POSTGRES_PASSWORD: mediacms
      POSTGRES_DB: mediacms
      TZ: Europe/Paris
    healthcheck:
      test: ["CMD-SHELL", "pg_isready", "--host=db", "--dbname=$POSTGRES_DB", "--username=$POSTGRES_USER"]
      interval: 30s
      timeout: 10s
      retries: 5

  celery_beat:
    image: mediacms/mediacms:latest
    volumes:
      - /srv/path/Files/mediacms/deploy:/home/
      - /srv/path/Files/mediacms/logs:/home/
      - /srv/path/Files/mediacms/media_files:/home/
      - /srv/path/Files/mediacms/cms/
    environment:
      ENABLE_UWSGI: 'no'
      ENABLE_NGINX: 'no'
      ENABLE_CELERY_SHORT: 'no'
      ENABLE_CELERY_LONG: 'no'
      ENABLE_MIGRATIONS: 'no'

  celery_worker:
    image: mediacms/mediacms:latest
    deploy:
      replicas: 1
    volumes:
      - /srv/path/Files/mediacms/deploy:/home/
      - /srv/path/Files/mediacms/logs:/home/
      - /srv/path/Files/mediacms/media_files:/home/
      - /srv/path/Files/mediacms/cms/
    environment:
      ENABLE_UWSGI: 'no'
      ENABLE_NGINX: 'no'
      ENABLE_CELERY_BEAT: 'no'
      ENABLE_MIGRATIONS: 'no'
    depends_on:
      - migrations
```

```bash
docker-compose up -d
```

CSS will probably be missing because reasons, so bash into the `web` container:

```bash
docker exec -it mediacms_web_1 /bin/bash
```

Then:

```bash
python collectstatic
```

No need to reboot.
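Once the stack is up, a quick sanity check can confirm that all services started and the web container is answering on the mapped port (8870 as used above):

```shell
# List service states; everything should be Up (db and redis: healthy)
docker-compose ps

# The web service should answer on the host port mapped in the compose file
curl -I http://localhost:8870
```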

Self Host Pen Testing
Anyone have any good external pen-testing tools that you've used on your self-hosted setup? Mine is pretty secure overall, but I would like to be able to scan the WAN side for vulnerabilities or misconfigurations, just to make sure I haven't missed anything.
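A common starting point is an nmap scan of your WAN IP, run from a machine outside your own network (otherwise you only see the LAN view). A hedged sketch; `203.0.113.10` is a documentation-range placeholder for your public IP:

```shell
# All TCP ports with service-version probing (-p- is slow but thorough)
nmap -p- -sV 203.0.113.10

# UDP scanning is much slower, so limit it to the most common ports
nmap -sU --top-ports 50 203.0.113.10
```

Only scan addresses you own or have permission to test; some VPS providers also restrict outbound scanning, so check the terms of wherever you run this from.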

cross-posted from: > I'm asking this because I'm very new to the Yocto project. I'm going through the documentation but it's a bit overwhelming to me, looking at what `Fishwaldo` has achieved (link embedded in the title). I would like to learn how he did it and how I could create my own image based on a supported kernel with necessary drivers and boot the `Star64` board. > > From what I understand, he: > > 1. Forked the kernel tree and created his own branch. > 2. Put in the necessary drivers (including OEM drivers) - I'm not really sure how he did it since I'm new to Linux (any tips would be appreciated!). > 3. I can't quite make out the layers he used to build the minimal image (I will study the guide more to figure this out). > 4. Finally, he compiled it, alongside compiling U-boot, partitioned the SD-card and booted the device. > > Am I right? I'm missing a lot of steps in the middle, would really appreciate any help in understanding this. Thanks!
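In Yocto terms, steps 2-3 usually come down to adding a BSP layer that carries a kernel recipe (or `.bbappend`) pointing at the forked kernel tree, then selecting the machine and building. A hedged sketch of the generic flow; the layer name `meta-star64`, the machine name, and the branch are placeholders, not necessarily what Fishwaldo actually used:

```shell
# Fetch the reference distro and enter a build environment
git clone git://git.yoctoproject.org/poky -b kirkstone
cd poky
source oe-init-build-env

# Add the BSP layer containing the forked-kernel recipe and drivers
bitbake-layers add-layer ../meta-star64

# Select the board in conf/local.conf, e.g.:  MACHINE = "star64"
# then build a minimal image (this also builds the kernel and U-Boot
# if the BSP declares them as dependencies)
bitbake core-image-minimal
```

The resulting image under `tmp/deploy/images/<machine>/` is what gets written to the SD card, which corresponds to step 4 in the list above.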

Recommendations please: Self-hosted web site analytics
Hello y'all! I have my personal (static) website/blog running on Netlify out on the public internet. Netlify, in case you're not familiar, is not a traditional web host, so I can't add databases or anything else like that on the server itself. Right now, that site has zero analytics/visitor tracking, and I've decided I want to fix that. I want to know how many people visited my site and which pages they looked at. I am NOT looking to monetize anything, to be clear. I want to self-host that analytics service at home, on my home server, but I need two things, please: 1. Recommendations for which app to use. I've checked out Umami and Plausible and they both look good for my meager purposes. But please let me know which app makes sense for a personal web site with low-ish traffic. Is there something simpler I could do? 2. Help getting the reverse proxy set up so my public web site can send analytics data to my home server. I would prefer this to be entirely under my control, so no Cloudflare or Tailscale, for instance. Is Caddy an option? I get really confused really quickly at this level of networking, so maybe I just need a really plain-English guide to handling this sort of thing? Thanks for any/all ideas! Y'all so totally rock! **ETA:** A little more info about Netlify and why I can't install or use tools other traditional web hosts might offer. **SECOND EDIT**: Thanks for the GoatCounter suggestion; I am trying that out now for the analytics side of this. Getting it set up was easy and free, using their server. (I know, I know...) If I still like the app after the next couple of weeks, I will move it in-house and self-host. That gives me a couple of weeks to figure out my second issue above: how to have my public web site make requests to my self-hosted, behind-the-firewall/NAT service. Yay, more learning!
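For the reverse-proxy half, Caddy is indeed an option: point a subdomain at your home IP (via DynDNS or a plain A record), forward ports 80/443 on the router to the home server, and let Caddy terminate TLS in front of the analytics app. A hedged sketch; `stats.example.com` and the backend port are placeholders (GoatCounter's port is whatever you start it on), and the file is written to /tmp only for illustration:

```shell
# Hypothetical Caddyfile on the home server: the public site's tracker
# script would then load from https://stats.example.com while the app
# itself only listens on localhost.
cat > /tmp/goatcounter-Caddyfile <<'EOF'
stats.example.com {
    reverse_proxy localhost:8080
}
EOF
```

The static site never needs server-side access to the analytics box; the visitor's browser sends the hit to `stats.example.com` directly, which is why this works even on a host like Netlify.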

For those who self-host their music services, what are your must-have plugins for beets and/or MusicBrainz?
I’m curious what plugins people like the most and find the most useful.
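For anyone answering: beets plugins are enabled with a single `plugins:` line in its `config.yaml`. A minimal sketch; the plugin selection below is just an example set (all are stock beets plugins), and the file is written to /tmp here rather than the real `~/.config/beets/config.yaml`:

```shell
# Example beets config fragment enabling a few common plugins
cat > /tmp/beets-config.yaml <<'EOF'
plugins: fetchart lyrics lastgenre chroma
fetchart:
    auto: yes
lastgenre:
    auto: yes
EOF
```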
