#+hugo_base_dir: ../
* DONE Chatting in the 21st century :debian:english:selfhost:free_software:matrix:irc:
CLOSED: [2024-09-07 Sat 17:25]
:PROPERTIES:
:EXPORT_FILE_NAME: chatting-21st-century
:END:
Several people have been asking me to explain and/or write about my
solution for chatting nowadays. I realize that the current scenario
is much more complex than, say, 10 or 20 years ago. Back then, this
post would probably be more about the IRC client I used than about
different chatting technologies.
I have also spent a non-trivial amount of time setting things up the
way I want, so I understand that it's about time to write about my
setup not only because I think it can be helpful to others, but also
because I would like to document things for myself.
** The backbone: Matrix
I chose to use [[https://matrix.org][Matrix]] as the place where I integrate everything.
Despite there being some [[https://anarc.at/blog/2022-06-17-matrix-notes/][heavy (and justified) criticism]] on the
protocol itself, it serves me well for what I need right now.
Obviously, I don't like the fact that I have to give Matrix and all
of its accompanying bridges a VPS with 4GB of RAM and 3 vCPUs, but I
think that ship has sailed, unfortunately.
In an ideal world, I would be using [[https://xmpp.org/][XMPP]] and dedicating only a
fraction of the resources I'm using today to have a full chat system.
And since I have been running my personal XMPP server for more than a
decade now, I did try to find a solution that would allow me to keep
using it, but unfortunately the protocol became almost a hobbyist
thing, so there's that.
** A few disclaimers
I self-host everything, including my Matrix server. Much of what I
did won't work if you don't self-host Matrix, so keep that in mind.
This won't be a post /teaching/ you how to deploy the services. My
intention is to describe /what I use/ and for /what purpose/.
Also, as much as I try to use Debian packages for everything I do, I
opted to deploy all services using a community-maintained Ansible
playbook which is very well written and organized:
[[https://github.com/spantaleev/matrix-docker-ansible-deploy][matrix-docker-ansible-deploy]].
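For reference, enabling services with that playbook boils down to
setting variables in its =vars.yml=. The snippet below is a minimal,
from-memory sketch; the variable names are assumptions that may differ
between playbook versions, so check the playbook's documentation:
#+begin_src yaml
# vars.yml -- illustrative only; variable names are from memory and
# may not match your playbook version exactly.
matrix_domain: example.com            # hypothetical domain

# Enable the bridges discussed later in this post.
matrix_heisenbridge_enabled: true
matrix_mautrix_telegram_enabled: true
#+end_src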
Last but not least, as I said above, you will likely need a machine
with a good amount of RAM, CPU and storage, especially if you deploy
[[https://github.com/element-hq/synapse][Synapse]] as your Matrix homeserver (which is what I recommend if you
plan to use the bridges I'll mention). My current VPS has 4GB of RAM,
3 vCPUs and 80GB of storage (of which I'm currently using
approximately 55GB).
** Problem #1: my Matrix client(s)
There are [[https://matrix.org/ecosystem/clients/][a lot of clients]] that can talk the Matrix protocol, but most
of them are either web clients or GUI programs. I live on the
terminal, more specifically inside Emacs, so I settled for the amazing
[[https://github.com/alphapapa/ement.el][ement.el]] Emacs mode. It works surprisingly well, but unfortunately
doesn't support end-to-end encryption out of the box; for that, you
have to hook it up with [[https://github.com/matrix-org/pantalaimon/][pantalaimon]]. That project seems abandoned,
though, so I don't recommend using it; I don't use it myself.
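For reference, getting started with =ement.el= is pretty simple. A
minimal sketch, assuming a recent Emacs with =use-package= and the
package installed from GNU ELPA:
#+begin_src emacs-lisp
;; Minimal ement.el setup sketch.
(use-package ement
  :ensure t)

;; Then connect interactively:
;;   M-x ement-connect RET
;; and enter your Matrix user ID (e.g. @user:example.com) and password.
#+end_src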
When I have to reply to an E2E-encrypted message from another user, I
go to my web browser and use my self-hosted [[https://app.element.io/][Element]] client. It's a
nuisance, but one that I'm willing to accept because of security
concerns.
If you're into web clients and don't want to use Element (because it
is heavy), you can try [[https://github.com/ajbura/cinny][Cinny]]. It's lightweight and supports a decent
set of features.
If you're a terminal lover but don't use Emacs, you may want to try
[[https://github.com/tulir/gomuks][gomuks]] or [[https://iamb.chat/][iamb]].
** Problem #2: IRC bridging
There are basically two types of IRC bridges for Matrix:
- The regular and most used [[https://github.com/matrix-org/matrix-appservice-irc][matrix-appservice-irc]]. This bridge /takes
Matrix to IRC/ (think of IRC users with the =[m]= suffix appended to
their nicknames), and is what the [[https://matrix.org][matrix.org]] and other big
homeservers (including [[https://matrix.debian.social][matrix.debian.social]]) use. It's a complex
service that allows thousands of Matrix users to connect to IRC
networks, but it unfortunately [[https://libera.chat/news/matrix-bridge-disabled-retrospective][has complex problems]] and is only
worth using if you intend to host a community server.
- A bouncer-like bridge called [[https://github.com/hifi/heisenbridge][Heisenbridge]]. This is what I use
personally. It /takes IRC to Matrix/, which means that people on
IRC will /not/ know that you're using Matrix. This bridge is much
simpler, and because it acts like a bouncer it's pretty much
impossible for it to cause problems with the IRC network.
Because I sometimes like to use other IRC clients, I
still run a regular [[https://wiki.znc.in/ZNC][ZNC bouncer]], and I use Heisenbridge to connect to
my ZNC. This means that I can use, e.g., ERC inside Emacs /and/ my
Matrix bridge at the same time. But you don't necessarily need to run
another bouncer; you can simply use Heisenbridge and connect directly
to the IRC network(s) you want.
A word of caution, though: unlike ZNC, Heisenbridge doesn't support
per-user configuration when you use it in bouncer mode. This is the
reason why you need to self-host it, and why it's not possible to
offer the service to other users (they would have access to your IRC
network configuration otherwise).
It's also worth talking about logs. I find that keeping logs of
everything that happens on IRC has saved me a bunch of times, and so
I think it's really important to keep doing that. Unfortunately,
neither =ement.el= nor Element support logging things out of the box
(at least not that I know of). This is also one of the reasons why I
still keep my ZNC around: I configure it to log everything.
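In case you're wondering, that part is simple: ZNC ships a =log=
module that can be loaded globally, per user or per network. As a
sketch, from an attached IRC client something like this should do
(check the ZNC documentation for the details and where the logs end
up on disk):
#+begin_example
/msg *status loadmod log
#+end_example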
** Problem #3: Telegram
I don't use Telegram myself, but unfortunately several people from the
Debian community do, especially in Brazil. There is a whole Debian
community on Telegram, and I wanted to be able to bridge our Debian
Matrix channels to their Telegram counterparts.
I am currently using [[https://github.com/mautrix/telegram][mautrix-telegram]] for that, and it's working
great. You need someone with a Telegram account to configure their
credentials so that the bridge can connect to it, but afterwards it's
really easy to bridge channels together.
** Problem #4: GitLab webhooks
Something else I wanted to be able to do was to receive notifications
regarding new issues, merge requests and other activities from [[https://salsa.debian.org][Salsa]].
For this, I'm using [[https://github.com/maubot/maubot][maubot]], which is awesome and has a
[[https://plugins.mau.bot/][huge list of plugins]]. I'm using the [[https://github.com/maubot/gitlab][gitlab]] one.
** Final thoughts
Overall, I'm satisfied with the setup I have now. It has certainly
taken some time and effort to find the right tool for each problem I
needed to solve, and I still feel like there are some rough edges to
soften (like the fact that my Emacs client doesn't support E2E
encryption out of the box, or the whole logging situation), but
otherwise things are working fine and I haven't had any big problems
with the deployment. You do have to be much more careful about things,
though; for example, I once installed an unrelated service that
"hijacked" my Apache configuration and made Matrix's federation
silently stop working.
If you have more specific questions about any part of my setup, shoot
me an email and I'll do my best to help.
Happy chatting!
* DONE The Pagure Debian package is now orphan :debian:free_software:english:pagure:
CLOSED: [2024-06-12 Wed 21:16]
:PROPERTIES:
:EXPORT_FILE_NAME: pagure-debian-is-now-orphan
:END:
As [[* Planning to orphan Pagure on Debian][promised]] in the last post, I have now orphaned the Pagure Debian
package. Here's the full text I posted on the BTS:
#+BEGIN_QUOTE
After several years, I finally decided to orphan pagure :-(.
I haven't been using it as my personal forge anymore, and unfortunately
upstream development slowed down quite a bit after the main author and
maintainer stopped contributing regularly to the project. But that is
not to say that upstream is dead; they are still working towards
preparing the next release.
Pagure is a big package with several components and an extensive list of
build dependencies (and an even bigger testsuite which I never managed
to make fully work on Debian). It is not for the faint of heart, and
most of the time is usually spent fixing its (build) dependencies so
that it doesn't get removed from testing.
If I may, I would like to leave some suggestions for a future
maintainer.
- I never had the time to write dep8 tests, mainly because setting up
the software is not trivial. It would be great if the package had
more Debian-centric testing.
- Speaking of the hurdles to setting up Pagure, I believe the package
installation could be made a bit more automated using debconf. I
don't see a way to fully automate it (look at d/README.Debian), but
there is certainly room for improvement.
I also left a brief TODO list inside d/README.source; feel free to
tackle any item there!
I wish the next maintainer can have as much fun with the package as I
did when I first made it for Debian!
Thank you,
#+END_QUOTE
That's it. It was good while it lasted, but I needed to unburden
myself so that I don't have that constant feeling of "I should be
properly maintaining this package...".
If you'd like to give maintaining Pagure a try, now is the time!
* DONE Planning to orphan Pagure on Debian :english:debian:free_software:
CLOSED: [2024-02-25 Sun 22:23]
:PROPERTIES:
:EXPORT_FILE_NAME: planning-to-orphan-pagure
:END:
I have been thinking more and more about orphaning the [[https://tracker.debian.org/pagure][Pagure Debian
package]]. I don't have the time to maintain it properly anymore, and
I have also lost interest in doing so.
** What's Pagure
[[https://pagure.io/pagure][Pagure]] is a git forge written entirely in Python using pygit2. It was
almost entirely developed by one person, Pierre-Yves Chibon. He is
(was?) a Red Hat employee and started working on this new git forge
almost 10 years ago because the company wanted to develop something
in-house for Fedora. The software is amazing and I admire Pierre-Yves
quite a lot for what he was able to achieve basically alone.
Unfortunately, a few years ago Fedora [[https://communityblog.fedoraproject.org/making-a-git-forge-decision/][decided]] to move to Gitlab and
Pagure's development pretty much stalled.
** Pagure in Debian
Packaging Pagure for Debian was hard, but it was also very fun. I
learned quite a bit about many things (packaging and non-packaging
related), interacted with the upstream community, decided to dogfood
my own work and run my Pagure instance for a while, and tried to get
newcomers to help me with the package (without much success,
unfortunately).
I remember that when I started packaging Pagure, Debian was also
moving away from Alioth and discussing options. For a brief moment
Pagure was a contender, but in the end the community decided to
self-host Gitlab, and that's why we have [[https://salsa.debian.org][Salsa]] now. I feel like I
could have tipped the scales in favour of Pagure had I finished
packaging it for Debian before the decision was made, but then again,
to the best of my knowledge Salsa doesn't use our Gitlab package
anyway...
** Are you interested in maintaining it?
If you're interested in maintaining the package, please get in touch
with me. I will happily pass the torch to someone else who is still
using the software and wants to keep it healthy in Debian. If there
is nobody interested, then I will just orphan it.
* DONE Migrating my repositories to Forgejo :english:selfhost:free_software:
CLOSED: [2024-02-24 Sat 23:51]
:PROPERTIES:
:EXPORT_FILE_NAME: migrating-to-forgejo
:END:
After some thought, I decided to migrate my repositories to [[https://forgejo.org][Forgejo]].
I know, I know... The name sucks a little bit, but nothing is
perfect. Forgejo is a fork of [[https://gitea.com][Gitea]], and was created after [[https://gitea-open-letter.coding.social/][some drama]]
regarding Gitea Ltd taking over the development of the Gitea project.
I have to be honest and say that I'm growing tired of seeing so much
drama and confusion arise from Free Software communities and projects,
but in a way this is just a reflection of what's happening with the
world in general, and there's very little I can do about it. Anyway.
Deploying Forgejo was easy thanks to [[https://github.com/mother-of-all-self-hosting/mash-playbook][mash-playbook]], which is a project
I've been using more and more to deploy my services. I like how
organized it is, and the maintainer is pretty responsive. On top of
that, learning more about Ansible had been on my TODO list for quite a
while.
All of this means that I decided to move /away/ from [[https://sr.ht][Sourcehut]] (I
might use it as a mirror for my public repositories, though). I did
that because I wanted to self-host my git forge again (I've been doing
that for more than a decade if you don't count my migration to
Sourcehut last year). Not liking some of Sourcehut's creator's
opinions (and the way he puts them out there) may or may not have
influenced my decision as well.
** A Continuous Integration to rule them all
Something that I immediately missed when I set up Forgejo was a CI. I
don't have that many uses for it, but when I was using Sourcehut I set
up its build system to automatically publish this blog whenever a new
commit was made to its git repository. Fortunately, =mash-playbook=
also supports deploying [[https://woodpecker-ci.org/][Woodpecker CI]], so after fiddling with the
Forgejo ↔ Woodpecker integration for a couple of days, I managed to
make it work just the way I wanted.
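Just to illustrate the kind of pipeline I mean, here's a rough sketch
of what a blog-publishing Woodpecker pipeline can look like. This is
not my actual configuration: the image names and the deployment target
are placeholders, and older Woodpecker releases use =pipeline:= instead
of =steps:= as the top-level key.
#+begin_src yaml
# .woodpecker.yml -- illustrative sketch only.
steps:
  build:
    image: my-hugo-image       # placeholder image that ships hugo
    commands:
      - hugo --minify          # build the site into ./public
  publish:
    image: my-deploy-image     # placeholder image that ships rsync/ssh
    commands:
      - rsync -av public/ deploy@myserver:/var/www/blog/  # hypothetical target
#+end_src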
** Next steps
Write more :-). Really... It's almost as if I like deploying things
more than writing on my blog! Which is true, but at the same time
isn't. I've always liked writing, but somehow I grew so conscious of
what to publish on this blog that I'm finding myself avoiding doing it
at all. Maybe if I try to change the way I look at the blog I'll get
motivated again. We'll see.
* DONE Using WireGuard to host services at home :english:howto:selfhost:wireguard:debian:
CLOSED: [2023-05-23 Tue 00:56]
:PROPERTIES:
:EXPORT_FILE_NAME: using-wireguard-host-services-home
:END:
It's been a while since I had this idea to leverage the power of
[[https://wireguard.org][WireGuard]] to self-host stuff at home. Even though I pay for a proper
server somewhere in the world, there are some services that I don't
consider critical to put there, or that I consider *too* critical to
host outside my home.
** It's only NATural
With today's ISP plans for end users, I find the amount of trouble
they create when you try to host anything at home very annoying.
Dynamic IPs, NAT/CGNAT, port blocking and traffic shaping are only a
few examples of the limitations that prevent users from making local
services reliably reachable from outside.
** WireGuard comes to help
If you already pay for a VPS or a dedicated server somewhere, why not
use its existing infrastructure (and public availability) in your
favour? That's what I thought when I started this journey.
My initial idea was to use a reverse proxy to redirect external
requests to the service running at my home. But how could I make sure
that these requests reach my
dynamic-IP-behind-a-NAT-behind-another-NAT? Well, let's create a
tunnel! WireGuard is the perfect tool for that, for many reasons:
it's stateless, very performant, secure, and requires very little
configuration.
** Setting up on the server
On the server side (i.e., VPS or dedicated server), you will create
the first endpoint. Something like the following should do:
#+begin_src ini
# Server side (e.g. /etc/wireguard/wg0.conf on the VPS).
[Interface]
PrivateKey = PRIVATE_KEY_HERE
Address = 10.0.0.1/32
ListenPort = 51821

# The machine at home, configured further below.
[Peer]
PublicKey = PUBLIC_KEY_HERE
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 10
#+end_src
A few interesting points to note:
- The =Peer= section contains information about the home service that
will be configured below.
- I'm using =PersistentKeepalive= because I have a dynamic IP at my
home. If you have a static IP, you could get rid of
=PersistentKeepalive= and specify an =Endpoint= here (don't forget
to set a =ListenPort= *below*, in the =Interface= section).
- Now you have an IP that you can forward requests to. If we're
talking about HTTP traffic, Apache and nginx are absolutely capable
of doing it (see the sketch right after this list). If we're talking
about other kinds of traffic, you might want to look into other
utilities, like [[https://www.haproxy.org/][HAProxy]], [[https://traefik.io/traefik/][Traefik]] and others.
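For the HTTP case, a minimal nginx sketch on the public server could
look like this. The domain name and the port the home service listens
on (8080) are assumptions; adjust them to your setup:
#+begin_src nginx
# Reverse proxy on the public server, forwarding requests through the
# WireGuard tunnel to the machine at home. Domain and port are
# hypothetical.
server {
    listen 80;
    server_name service.example.com;

    location / {
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
#+end_src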
** Setting up at your home
At your home, you will configure the peer:
#+begin_src ini
# Home side (e.g. /etc/wireguard/wg0.conf at home).
[Interface]
PrivateKey = PRIVATE_KEY_HERE
Address = 10.0.0.2/32

# The public server; the home side is the one that initiates the
# connection, hence the Endpoint here.
[Peer]
PublicKey = PUBLIC_KEY_HERE
AllowedIPs = 10.0.0.1/32
Endpoint = YOUR_SERVER:51821
PersistentKeepalive = 10
#+end_src
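With both sides in place, bringing the tunnel up is just a matter of
the following (assuming the configurations above are saved as
=/etc/wireguard/wg0.conf= on each machine):
#+begin_src sh
# On both machines; the interface/file name wg0 is an assumption.
sudo wg-quick up wg0

# Make the tunnel persistent across reboots on systemd-based systems.
sudo systemctl enable --now wg-quick@wg0

# Verify that the peers are talking to each other.
sudo wg show
#+end_src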
** A few notes about security
I would be remiss if I didn't say anything about security, especially
because we're talking about hosting services at home. So, here are a
few recommendations:
- Make sure to put your services in a separate local network. Using
VLANs is also a good option.
- Don't run services on your personal (or work!) computer, even if
they'll be running inside a VM.
- Run a firewall on the WireGuard interface and make sure that you
only allow traffic over the required ports (see the sketch right
after this list).
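As a sketch of that last point, with =ufw= on the machine hosting the
service it can be as simple as the following. The port is an
assumption (use whatever your service actually listens on), and
remember to also allow anything else you need, like SSH, before
enabling the firewall:
#+begin_src sh
# ufw denies incoming traffic by default once enabled; we only open
# the service port on the WireGuard interface (8080 is an assumption).
sudo ufw allow in on wg0 to any port 8080 proto tcp
sudo ufw enable
#+end_src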
Have fun!
* DONE Ubuntu debuginfod and source code indexing :english:ubuntu:debuginfod:debian:free_software:gdb:
CLOSED: [2023-05-13 Sat 16:43]
:PROPERTIES:
:EXPORT_FILE_NAME: ubuntu-debuginfod-source-code-indexing
:END:
You might remember that in my [[/posts/debuginfod-is-coming-to-ubuntu/][last post]] about the [[https://debuginfod.ubuntu.com][Ubuntu debuginfod
service]] I talked about wanting to extend it and make it index and
serve source code from packages. I'm excited to announce that this
has been a reality since the Ubuntu Lunar (23.04) release.
The feature should work for a lot of packages from the archive, but
not all of them. Keep reading to better understand why.
** The problem
While debugging a package in Ubuntu, one of the first steps you need
to take is to install its source code. There are some problems with
this:
- =apt-get source= requires =dpkg-dev= to be installed, which ends up
pulling in a lot of other dependencies.
- GDB needs to be taught how to find the source code for the package
being debugged. This can usually be done by using the =dir=
command, but finding the proper path to use is usually not trivial,
and you find yourself having to use more "complex" commands like
=set substitute-path=, for example.
- You have to make sure that the version of the source package is the
same as the version of the binary package(s) you want to debug.
- If you want to debug the libraries that the package links against,
you will face the same problems described above for each library.
So yeah, not a trivial/pleasant task after all.
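To make it concrete, the manual workflow looks roughly like this; the
package name, version and paths are purely illustrative:
#+begin_src sh
# The old, manual way (illustrative package/version/paths).
sudo apt-get install dpkg-dev
apt-get source coreutils       # must match the installed binary version
gdb /usr/bin/ls
# ...and then, inside GDB, something like:
#   (gdb) directory ~/coreutils-9.1/src
#   (gdb) set substitute-path /build/coreutils-SOMETHING ~/coreutils-9.1
#+end_src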
** The solution...
Debuginfod can index source code as well as debug symbols. It is
smart enough to keep a relationship between the source package and the
corresponding binary's Build-ID, which is what GDB will use when
making a request for a specific source file. This means that, just
like what happens for debug symbol files, the user does not need to
keep track of the source package version.
While indexing source code, debuginfod will also maintain a record of
the relative pathname of each source file. No more fiddling with
paths inside the debugger to get things working properly.
Last, but not least, if there's a need for a library source file and
if it's indexed by debuginfod, then it will get downloaded
automatically as well.
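In practice, all you need on the client side is to point
=DEBUGINFOD_URLS= at the service; a quick sketch (the binary being
debugged is just an example):
#+begin_src sh
# Point GDB (and other debuginfod-aware tools) at the Ubuntu service.
# Recent Ubuntu releases may already set this for you.
export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"

# Any binary will do; /usr/bin/ls is just an example.
gdb /usr/bin/ls
# Inside GDB, commands like 'start' followed by 'list' will now
# transparently download the debug info and the matching source files.
#+end_src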
** ... but not a perfect one
In order to make debuginfod happy when indexing source files, I had to
patch =dpkg= and make it always use =-fdebug-prefix-map= when
compiling stuff. This GCC option is used to remap pathnames inside
the DWARF, which is needed because in Debian/Ubuntu we build our
packages inside chroots and the build directories end up containing a
bunch of random cruft (like =/build/ayusd-ASDSEA/something/here=). So
we need to make sure the path prefix (the =/build/ayusd-ASDSEA= part)
is uniform across all packages, and that's where =-fdebug-prefix-map=
helps.
This means that the package *must* honour =dpkg-buildflags= during its
build process, otherwise the magic flag won't be passed and your DWARF
will end up with bogus paths. This should not be a big problem,
because most of our packages do honour =dpkg-buildflags=, and those
that don't should be fixed anyway.
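For most packages this is automatic: a modern, dh-based =debian/rules=
already picks up the flags. For hand-rolled rules files, something
along these lines (roughly what the =dpkg-buildflags= manual page
suggests) does the trick:
#+begin_src makefile
#!/usr/bin/make -f
# Sketch of a debian/rules that honours dpkg-buildflags.

# The easy way: dh passes the build flags (including
# -fdebug-prefix-map) to the build system for you.
%:
	dh $@

# For hand-rolled rules files, the equivalent is roughly:
#   DPKG_EXPORT_BUILDFLAGS = 1
#   include /usr/share/dpkg/buildflags.mk
#+end_src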
** ... especially if you're using LTO
Ubuntu enables [[https://gcc.gnu.org/onlinedocs/gccint/LTO-Overview.html][LTO]] by default, and unfortunately we are affected by an
[[https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109805][annoying (and complex) bug]] that results in those bogus pathnames not
being properly remapped. The bug doesn't affect all packages, but if
you see GDB having trouble finding a source file whose full path
doesn't start with =/usr/src/...=, that is a good indication that
you're being affected by this bug. Hopefully we should see some
progress in
the following weeks.
** Your feedback is important to us
If you have any comments, or if you found something strange that looks
like a bug in the service, please reach out. You can either send an
email to my [[https://lists.sr.ht/~sergiodj/public-inbox][public inbox]] (see below) or file a bug against the
[[https://bugs.launchpad.net/ubuntu-debuginfod][ubuntu-debuginfod project on Launchpad]].
* DONE Novo blog, novos links :pt___br:portugues:
CLOSED: [2023-04-20 Thu 21:38]
:PROPERTIES:
:EXPORT_FILE_NAME: novo-blog-novos-links
:END:
I know I haven't posted here in a while, but I'd like to let my
readers (huh!?) know that I've switched the blog's engine to [[https://gohugo.io][Hugo]].
Besides that, you will notice that the post URLs have changed as well
(they no longer carry a date and are now composed only of the post
name; but see below), and that there was also a change to the =pt_br=
tag: in the future I intend to stop posting under it, and will post
only under the [[/tags/portugues][=portugues=]] tag. If you follow the RSS/ATOM feed of
the =pt_br= tag, please update the link.
The old URLs will still work because they are being redirected to the
correct place (courtesy of =mod_rewrite=). In any case, if you have
bookmarked the URL of an old post, I suggest you update it.
Other than that, everything should work "as usual" (TM). I'm posting
straight from Emacs (using [[https://ox-hugo.scripter.co][ox-hugo]]), and I've created a nice setup
on [[https://sr.ht][Sourcehut]] to automatically publish posts as soon as I push them to
git. Hm, that would actually be a good topic for a post...
* DONE New blog, new links :en___us:english:
CLOSED: [2023-04-20 Thu 21:26]
:PROPERTIES:
:EXPORT_FILE_NAME: new-blog-new-links
:END:
I know I haven't posted in a while, but I'd like to let my readers
(who!?) know that I've switched my blog's engine to [[https://gohugo.io][Hugo]]. Along with
that change, there are also changes to post URLs (no more dates, only
the post name; but see below) and also a change to the =en_us= tag:
eventually, I will stop posting things under it and start posting
solely under [[/tags/english][=english=]]. If you're subscribed to the =en_us= RSS/ATOM
feed, please update it accordingly.
The old URLs should still work because they're being redirected to the
correct path now (thanks, =mod_rewrite=). Either way, if you have
bookmarked some old post URL I'd suggest that you update it.
Other than that, everything should be "the same" (TM). I'm posting
from Emacs (using [[https://ox-hugo.scripter.co][ox-hugo]]), and made quite a cool setup with [[https://sr.ht][Sourcehut]]
in order to automatically publish posts when I push them to the git
repo. Hm, this would actually be a good topic for a post...