Recently I bought a GoPro 12. It's a really nice action camera, and I was curious whether it could
be used as a webcam. I found that the GoPro 12 exposes a network interface when connected over USB,
so I can use curl to switch the device into webcam mode and listen to the UDP video stream. ffmpeg
can output to a v4l2loopback device, so all messengers and browsers can use this virtual device as
a webcam.
Video latency is about 500-700ms on my i7-10700K, so it’s possible to use the device for video
calls.
There is a GitHub repo, gopro_as_webcam_on_linux, which contains a shell script that automates all
the required steps.
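The whole flow, as a rough sketch. The camera address, endpoint, and stream port below are placeholders from my notes, not verified API details; the gopro_as_webcam_on_linux script is the authoritative source:

```shell
# Load the virtual camera module (creates a /dev/videoN device).
sudo modprobe v4l2loopback

# Ask the camera to switch into webcam mode over its USB network
# interface (placeholder address and endpoint -- check your device).
curl "http://172.2X.1XX.51/gp/gpWebcam/START"

# Feed the UDP stream into the v4l2loopback device.
ffmpeg -i 'udp://@0.0.0.0:8554' -f v4l2 -pix_fmt yuv420p /dev/video0
```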
Suppose you have downloaded a video from YouTube using yt-dlp, and VLC plays only the sound with no
video. To fix it:
- Open Preferences.
- Click Video.
- Select Output = XVideo Output (XCB).
- Restart VLC.
Unfortunately, Java Virtual Threads have some issues with deadlocks. For more information, read the
article Java 21 Virtual Threads - Dude, Where’s My Lock?.
Java 21 introduces virtual threads. Asynchronous programming models are no longer necessary in many
cases, as we can assume our code runs on virtual threads. This significantly simplifies the code of
web applications.
The Linux kernel has traditionally been compiled using GCC and binutils. However, it is now possible
to build the Linux kernel with Clang. Distributions such as Android, ChromeOS, OpenMandriva, and
Chimera Linux already use Clang-built kernels. ClangBuiltLinux
is an effort to get the Linux kernel to compile with Clang.
FreeCAD 1.0 has been released today!
I have used the Solarized Dark color scheme for a long time. I also use the htop
utility to inspect system processes on my Linux machines. Unfortunately, some numbers
(like the 1 min load average, and the totals in the CPU/mem bar graphs) are barely visible with
Solarized Dark. Today I found that htop has a “Broken Gray” theme that fixes this glitch. Just
press F2, select “Colors”, and set the appropriate theme.
The Testcontainers library looks interesting for testing Java applications. I am going to try it
soon.
This GitHub repo collects the artwork of most Linux distros.
I use Flatpak apps every day. Flatseal is a
graphical utility to review and modify Flatpak app permissions. It’s really worth a try.
Java 22 released.
Most text translation services, like Google Translate, use the cloud, which raises some privacy
issues. Bergamot is software that translates text locally, without Internet access.
There is a Rocky Linux 9 VM in VirtualBox, and its root partition needs to be extended. By default,
RHEL-based distros use LVM, which allows several physical disks to be used as one logical volume.
- Check the name of the root partition device with df -h:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 882M 0 882M 0% /dev/shm
tmpfs 353M 5.0M 348M 2% /run
/dev/mapper/rl-root 8.0G 1.2G 6.9G 14% /
/dev/sda1 960M 223M 738M 24% /boot
tmpfs 177M 0 177M 0% /run/user/0
The name of the root partition device is /dev/mapper/rl-root. The volume group name is rl and the
logical volume name is root.
- Check the logical volumes with the command lvs -a -o +devices:
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
root rl -wi-ao---- <8.00g /dev/sda2(256)
swap rl -wi-ao---- 1.00g /dev/sda2(0)
/dev/mapper/rl-root uses the physical device /dev/sda2 and has a size of 8 GB.
- Add one more 10 GB disk to the VM using the VirtualBox configuration screen.
- Check the name of the new disk with fdisk -l:
...
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VBOX HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
...
The new disk’s name is /dev/sdb.
- Add the new physical disk to the existing volume group: vgextend rl /dev/sdb (if vgextend
complains that the disk is not initialized, run pvcreate /dev/sdb first).
- Resize the root logical volume: lvextend -l +100%FREE /dev/mapper/rl-root.
- Resize the file system: xfs_growfs /dev/mapper/rl-root.
- Check that the root partition was extended successfully with df -h:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 882M 0 882M 0% /dev/shm
tmpfs 353M 5.0M 348M 2% /run
/dev/mapper/rl-root 18G 1.3G 17G 7% /
/dev/sda1 960M 223M 738M 24% /boot
tmpfs 177M 0 177M 0% /run/user/0
Today I have found GeeksforGeeks. This resource contains a lot of
useful information about data structures, algorithms, data science and other programming topics.
E.g. I have read about different tree traversal
techniques.
Found two nice Python cheatsheets:
small and extended.
They will be helpful for those of you who use Python every day.
Why were slow computers from the 80s solving the same problems as modern computers do today? Why
can hardware that is 1000x faster freeze in a text editor? Some everyday tasks have become more
complicated. E.g. text rendering in the 80s was just a copy of fixed-size bitmaps into video
memory. Nowadays, this process is a lot more sophisticated: it consists of several stages that
depend on each other, and the processor can spend a lot of time on this widespread task.
I used to work with GitLab CI/CD, but today I’ve tried
GitHub Actions. They are similar solutions, but there is one
important difference: GitLab CI/CD performs all workspace manipulations (like git clone) in a
separate preconfigured container, so you can use any Docker image for your jobs. In GitHub Actions,
everything runs in one container, so you can’t use an arbitrary Docker image: you have to install
all the essential tools that actions/checkout@v4 and similar steps require (the Node.js stack). On
the one hand it’s less convenient, but on the other it allows controlling all workspace
manipulations more precisely.
Sometimes you need to configure HTTPS for a web resource that runs in a network without Internet
access, where it’s impossible to use Let’s Encrypt to issue a certificate. One solution is to run
your own certificate authority and sign certificates yourself.
Generate CA (Certificate Authority) key and certificate with a command:
openssl req -x509 -nodes \
-newkey RSA:2048 \
-keyout root-ca.key \
-days 3650 \
-out root-ca.crt \
-subj '/C=GB/ST=London/L=London/O=Home/CN=Home'
Copy root-ca.crt to /usr/local/share/ca-certificates and execute update-ca-certificates. After
this, utilities like curl will trust certificates signed with the custom CA.
It’s also required to add the new CA to your browser. In Firefox, open Settings -> Privacy &
Security, find the “View Certificates…” button, and add root-ca.crt.
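The signing step below consumes example.com.csr, and the result is paired with example.com.key; those two can be produced first with a sketch like this (the subject fields are examples):

```shell
# Generate the server key and a certificate signing request (CSR)
# for example.com; these are the inputs for the signing step.
openssl req -new -nodes \
    -newkey rsa:2048 \
    -keyout example.com.key \
    -out example.com.csr \
    -subj '/C=GB/ST=London/L=London/O=Home/CN=example.com'
```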
Generate certificate with a command:
openssl x509 -req \
-CA root-ca.crt \
-CAkey root-ca.key \
-in example.com.csr \
-out example.com.crt \
-days 3650 \
-CAcreateserial \
-extfile <(printf "subjectAltName = DNS:example.com\nauthorityKeyIdentifier = keyid,issuer\nbasicConstraints = CA:FALSE\nkeyUsage = digitalSignature, keyEncipherment\nextendedKeyUsage=serverAuth")
As a result, there are two files:
example.com.crt — the SSL certificate for the domain example.com, signed by the custom CA;
example.com.key — the key for the SSL certificate.
You can use this certificate with NGINX like this:
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/ssl/certs/example.com.crt;
ssl_certificate_key /etc/ssl/certs/example.com.key;
location / {
return 200 'Hello world';
add_header Content-Type text/plain;
}
}
HashiCorp products are no longer Open
Source.
I understand this decision, but it’s still sad.
Today I’ve found aerc. It’s a terminal email client that can be a good
alternative to mutt. I’ll try it for a few weeks.
Finally I have returned to GitHub Pages.
I found an article “Now available: Fedora on Lenovo
laptops!”. It says that Lenovo sells
ThinkPad X1 Carbon Gen 8, ThinkPad P53, and ThinkPad P1 Gen 2 laptops with Fedora pre-installed.
On the one hand, it’s great for the Linux community, because pre-installed Linux means that all the
laptop’s systems are tested and work properly. On the other hand, Fedora provides updated packages
for approximately 13 months, so it’s not the best choice for a pre-installed system.
All smartphones are similar to each other nowadays. It’s just a screen with several buttons on the
edge. Most of the time it’s OK, but some tasks require typing a lot of text, so an on-screen
keyboard is not the best solution. Several years ago, it was possible to buy a slider keyboard case
for popular smartphones, so most of the time you use your smartphone as usual, but when you want to
type a long email or execute several commands over SSH, you just slide out the physical keyboard
and use your device like a PDA. It’s a bit strange that there is no such option nowadays. Of
course, it’s
possible to buy a Cosmo communicator or
PinePhone Pro with a
keyboard, but these devices are
for hardcore enthusiasts and it can be difficult to use them every day.
It looks like Mastodon is becoming more and more popular. Maybe in the future it will become a real
alternative to Twitter.
After installing language packages in Kubuntu 22.04, I found that the LANGUAGE variable had
changed. There was nothing in /etc/default/locale or ~/.pam_environment, so it looked a bit
strange. Then I realized that KDE writes locale configs into ~/.config/plasma-localerc, so I
changed the settings there and reloaded the session. Now the LANGUAGE variable has the desired
value.
After almost three years with Ubuntu 20.04 on my desktop, I’ve finally updated to Kubuntu 22.04.
Sometimes I come across an idea that today’s computers are too complicated, and it’s difficult to
understand them. People think that it was a lot easier in the 80s during the 8-Bit computer era to
grasp all of what is going on inside of any single machine, because the computers were relatively
simple and constrained.
I think it’s a controversial question. In the 80s, computers were simpler, but it was more
difficult to find information about them. There was no modern Internet with Google, StackOverflow
and Open Source, so computer enthusiasts spent a lot of time understanding relatively basic things.
Today, there are a lot of ways to understand how computers work.
There are books like
Computer Organization and Design RISC-V Edition
or university courses like
MIT Operating Systems Engineering OCW course.
Also, it’s possible to start your research with RISC-V microcontrollers, which are a lot simpler
than x86 CPUs.
I know a lot of developers who do not have enough time or desire to explore internal mechanisms of
the frameworks and libraries they use every day and very few people even try to read the
documentation and the source code to go deeper. It was the same in the 80s: people just learned
some basic commands to start a game from a ZX Spectrum cassette, but did not understand how those
commands actually worked. If someone really wants to understand a computer’s internal mechanisms,
it’s possible to do so. The only thing required is a desire to learn.
I’ve never used SQL savepoints in my
projects, but this feature can be really useful for some edge cases.
GENERATED ALWAYS AS IDENTITY is a PostgreSQL construct that creates an identity column for a table
and forbids specifying a value for this field manually. E.g.:
CREATE TABLE users (
id INT GENERATED ALWAYS AS IDENTITY,
name VARCHAR NOT NULL
);
INSERT INTO users(name) VALUES ('John Smith'); -- generates id value
INSERT INTO users(id, name) VALUES (42, 'Sam Smith'); -- throws an error
When I need to access my home network from remote locations, I use WireGuard. It’s fast and simple
to configure. In some countries, WireGuard is blocked by authorities. OpenVPN and L2TP/IPSec are
often blocked as well. The option that almost always works is Cisco AnyConnect. It’s possible to
run an AnyConnect-compatible server with ocserv. It is slower than
WireGuard, but it can solve the problem when everything else is blocked.
I tried DeepL. It produces accurate translations for my use cases.
KDE Plasma 5.27 has initial implementation of
window tiling.
In September 2020 I wrote an article Ubuntu Snap: the Price of the
Isolation. One inconvenience
with Snap was the update mechanism: it was impossible to disable auto-updates and check for new
versions of software by hand. The epic with updates in Snap has finally ended:
- August 15, 2022: the PR that allows holding
refreshes indefinitely for all the system’s snaps;
- November 15, 2022: it’s possible to disable auto-updates in Snap from the edge channel (here is
the post in snapcraft blog);
- January 10, 2023: snapd 2.58 released, it’s possible to disable auto-updates in Snap from the
stable channel.
It was a really long
discussion, going on since
2017.
Interesting article about Open Source software
for creators. The author provides an annual recap and preview of FOSS projects across the
ecosystem: image editing, painting, photography, 3D, special effects, CAD, animation, video, and
audio.
Continuing the topic of self-hosted services. There
is a subreddit r/selfhosted where people discuss alternatives
to popular online services that can be self-hosted without giving up privacy or locking you into a
service you don’t control. In this subreddit I found a link to the repository
awesome-sysadmin that contains a list of
different Open Source admin tools.
Some people use GitHub’s Gist as a blogging platform. It seems like an interesting idea: you can
publish posts using built-in Markdown, follow other GitHub members, and comment on their posts.
Gist has a lot of useful features for technical writing, like source code syntax highlighting and
advanced formatting.
On the other hand, Gist is another vendor lock-in, like Twitter or Facebook. You do not actually
own your account, and it can be terminated at any time, even without a violation of the GitHub user
agreement. I also use GitHub for this blog, but I use it as a static pages hosting platform, so if
I find something better, or GitHub disables my account, I can switch to something else without
losing my blog.
Maybe it’s some sort of paranoia, but it’s better to act before it’s too late.
I use Evolution mail client because it’s the only client that works well with Exchange. I used to
install Evolution using flatpak, and that version requires a password each time I start it. Today
I’ve switched from the flatpak version to the apt version, and now Evolution starts
gnome-keyring-daemon and uses it to store passwords, so I type my keyring password only once. It’s
a lot more convenient.
I use NRF24L01 chips from time to time. It’s a
low-cost radio with low power consumption. That is why these chips are really useful for
communications between IoT devices. NRF24L01 exposes an interface for low-level communications
between two chips. Sometimes it’s enough, but the other tasks require higher-level network protocols
like TCP. Recently I’ve found an inspiring
post that explains
how to set up a TCP/IP stack over NRF24L01. The author developed
nrfnet — an application that creates a virtual interface on a
Raspberry Pi and uses NRF24L01 as a backend for data transmission. The speed of such a connection
is not very high, but it allows running software like SSH or a web server.
The nrfnet project looks like a proof of concept that connects two Raspberry Pi nodes. I’ve found a
more mature project about NRF24L01 networking — TMRh20. It comprises
plenty of libraries that allow creating a network of Internet-enabled RF24/Arduino sensors by
providing an API similar to the Arduino Ethernet library. If you are interested in this subject,
you should definitely take a look at the TMRh20 Project Blog.
It’s time to update my desktop to Kubuntu 22.04. To be sure that there will be no problems with the
devices and the software that I use every day, it’s important to check the new distro before
installing it on the internal hard drive. I decided to install Kubuntu on an external disk.
First, I tried one of my USB flash drives. Unfortunately, all of them were extremely slow, so I
gave up on this idea. I also have one relatively old HDD that I used for backups several years ago.
Usually, an HDD is much faster and more reliable than a USB flash drive, so it looked like a
suitable solution for my task. I bought a 2.5" HDD enclosure and connected the HDD to my desktop. I
tried to install Kubuntu on it, but with no luck: it showed an I/O error at the FS creation step.
The error reproduced on different distros, so it was not distro-specific.
I thought the HDD was too old and had a lot of bad blocks, so I ran badblocks -svn -b 512 -c 65536 /dev/sda
to check the device. Unfortunately, it took an hour to check 0.1%, so I canceled the
operation. I was ready to give up on the whole idea, but then I tried a different USB cable, and it
worked! Kubuntu installed with no errors.
The moral: all parts of the chain can be faulty, so check all of them one by one, and it can help
you solve the problem.
Suppose you configure network settings on a remote Linux machine, and the only way to access this
machine is an SSH connection. To prevent access problems in case of network configuration failure,
try this approach:
- open tmux;
- write sleep 600 && reboot (you should cancel this command every 5-9 minutes and start it again);
- perform network configuration in a separate tab.
It’s important to make only temporary changes that will be lost after a reboot:
- do not execute commands like systemctl enable nftables; use systemctl start nftables instead;
- do not use the --permanent option for firewall-cmd.
In case of network configuration failure, your machine will reboot in 10 minutes. All configuration
changes will be wiped out, and you’ll be able to connect again.
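An alternative sketch of the same safety net, using shutdown’s built-in scheduling instead of sleep, so there is no timer to restart by hand:

```shell
# Schedule a reboot in 10 minutes before touching the network config.
shutdown -r +10 "network reconfiguration safety net"

# ... apply the temporary network changes in another tab ...

# If the SSH connection survived the changes, cancel the pending reboot.
shutdown -c
```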
ired.team — a site dedicated to pentesting, security tools and techniques.
Recently I’d read an article about Windows API
Hooking
from this resource. It describes a technique that can be used to intercept API calls. Malware often
uses this technique to inject malicious code into the user’s process.
Nginx is a de facto standard in my projects. Over the weekend, I read The Complete NGINX
Cookbook. I found some useful
tips about the $request_id variable and the rate_limit directive. I think the book is worth
reading.
Most of the time, I use nginx as an HTTP load balancer, but it can also balance raw TCP and UDP
streams with the stream block.
hping is a really flexible network diagnostic tool. It supports the TCP, UDP,
ICMP and RAW-IP protocols, has a traceroute mode, the ability to send files over a covert channel,
and many other features. It helps to investigate network failures and fix them, so I think it’s a
must-have instrument for every developer and admin.
Today I accidentally pressed some keys, and Firefox changed the text direction from left to right.
It was a bit confusing. If someone experiences the same issue and wants to change everything back,
it can be done with Ctrl-Shift-X.
The required things for offline coding:
- Books about the programming environment (OS, network, compiler, security and so on), plus pdfgrep
to search through these books.
- zeal to browse documentation about programming language, web server and
different libraries like Qt.
- A full Stack Overflow dump made by kiwix.
Do you know that it’s possible to download the whole Wikipedia or Stack Overflow with
kiwix? Just go to
library.kiwix.org, select required
resource and download the dump. It can take several hours, even on a fast connection. Then use the
reader to explore the resource offline. I prefer to install the
reader from Flatpak: flatpak install flathub org.kiwix.desktop
.
Kiwix is barely useful when you have a fast Internet connection, but in some corner cases it can be
extremely important.
Today I’ve discovered Command Line Heroes — the
podcast from RedHat. It tells stories about different tech fields like security, programming
languages, or open source. It’s more like an audiobook than a radio show. I think it’s worth
listening to.
Found chat.stackoverflow.com. Stack Overflow has reinvented IRC.
When you need to transfer a Git repo, e.g. send it by email, it’s possible to use git
bundle. Create a .bundle
file with git bundle create my-repo.bundle master, send it, and then extract the data with git clone my-repo.bundle my-repo.
Also, there is a Reddit
thread
about development environments for STM32 on Linux. So if you think STM32CubeIDE is not your choice,
you’d better look at this discussion.
Found a nice wiki page about STM32 Development
tools. There are some interesting alternatives to
the STM32CubeIDE. It’s worth reading.
Talking about Linux Kernel on Ubuntu 20.04 Server: found that it’s possible to install 5.15 LTS
manually using sudo apt install --install-recommends linux-generic-hwe-20.04
. Looks reasonable:
for server installations it may be dangerous to perform such updates automatically.
Switched from Adobe Acrobat Reader to Sumatra
PDF for viewing PDFs on my Windows machine. It’s
a lot smaller, faster and the source code is available on
GitHub.
I always thought that PNG was a file format for static images, but today I was searching for a new
emoji for Slack and found an animated image with a .png extension. Surprisingly, there is an
Animated Portable Network Graphics (APNG) file format, and
it’s supported by most modern web browsers.
Actually, the previous message is correct for the desktop version only: Ubuntu Server 20.04 uses
Linux kernel 5.4.0.
Found that Ubuntu 20.04.4 and Ubuntu 22.04.1 have the same kernel 5.15.0. I thought that 22.04
should have a later version, like 5.19.3.
After updating to Kubuntu 22.04, my OpenVPN stopped working with an error like:
OpenSSL: error:0A00018E:SSL routines::ca md too weak
Cannot load inline certificate file
Exiting due to fatal error
The correct way to solve this issue is certificate regeneration, but I do not control the server,
so the temporary solution is to add the line tls-cipher "DEFAULT:@SECLEVEL=0" to the ovpn config
file. It allows OpenVPN to use a weak TLS cipher, so the connection starts as usual.
Found an interesting article about
curl’s options for connecting to a different host. Most of the time I changed the Host HTTP header,
and it was enough for my cases, but today I realized that this solution does not work for HTTPS
resources. Here, I need to specify the host name during the TLS connection negotiation, so I can’t
use HTTP headers. There is an SNI field that allows telling the server which
host I want to access. Curl uses the URL to prepare the SNI field value: for the command curl https://example.com/foo
the SNI value is example.com, and when I set the Host header, it does not affect
SNI at all. To change SNI, the --resolve option can be used:
curl --resolve example.com:443:127.0.0.1 https://example.com/
The command above populates curl’s DNS cache with a custom entry for the host name example.com and
port 443 with the address 127.0.0.1. That is why curl will use the specified IP address to start
the TCP connection and then use example.com for the SNI field.
And one more useful Vim command: to make my markdown files easy to read, I limit the line width to
100 chars and highlight longer lines:
set tw=100
2mat ErrorMsg '\%101v.'
The editor splits lines automatically as I type, but if I add some text in the middle of a line, I
have to select the paragraph with the vap command and reformat it with gq.
I write articles and docs using Vim. It’s convenient when the editor can check spelling on the fly,
so I can fix mistakes as soon as possible. Sometimes I use different languages in one file and I
want the editor to find spelling issues for all languages in it. It’s possible to set multiple
languages for spell check in Vim with a command like :set spelllang=en_us,de_de
, so the editor
will use several dictionaries. I prefer to underline incorrect words with :hi SpellBad cterm=underline.
Update for the previous message: a moment ago I realized that the Ubuntu apt repo contains grip
4.2.0. It’s an old version that generates strange pages. The best decision was to switch from the
old apt version to grip 4.6.1 from PyPI. It can be installed with the command
pip3 install --user grip. Now the page looks great with no additional options in the config file.
I use grip to preview markdown documents locally. To make the
generated page look like a README on GitHub, add such a line to ~/.grip/settings.py:
STYLE_URLS = ['https://cdnjs.cloudflare.com/ajax/libs/github-markdown-css/5.1.0/github-markdown.min.css']
It’s a CDN URL for github-markdown-css. This
library is a minimal amount of CSS to replicate the GitHub Markdown style.
Found an outstanding notes.vim plugin for Vim. It allows writing
notes with basic formatting and navigation. The syntax highlighting of source code can be useful
for software developers’ notes.
Useful tip: it’s possible to execute some commands on Vim’s startup with the -c argument. I use it
to
prepare Vim for blog post editing like this: vim -c 'call ConfigDoc()' post.md
. I don’t want to
add this function call into .vimrc
for all .md
files, because there are a lot of different
markdown files that do not require this configuration. So command line argument does the trick.
I tried to download a file with curl on Oracle Linux 8 and got an error:
routines:ssl_choose_client_version:unsupported protocol
It’s an old host, and I have no access to upgrade the software, so I had to relax the encryption
settings with update-crypto-policies --set LEGACY. It’s not the best way to solve this issue, but
if you need to download a file and have no other options, it is worth doing.
The easiest way to find Ubuntu installation date is checking the installer’s directory:
ls -lt /var/log/installer
.
Sometimes I need to test new software or prepare isolated stands. I use
VirtualBox for these tasks. I want to access VMs from the host PC and
the easiest way to achieve this is by using bridged network adapters. Unfortunately, VM with a
bridged network is exposed not only to the host but also to all machines in the network, and the VM
relies on the router’s DHCP. It can be unacceptable for some locations.
Today I’ve found
how to access a NAT guest from host,
so I can forward the VM’s port to the host without exposing the VM to the entire network.
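As a reference sketch, such a forwarding rule can also be added from the command line; the VM name and ports below are examples:

```shell
# Forward host port 2222 to guest port 22 (SSH) on a NAT-attached VM.
VBoxManage modifyvm "rocky9" --natpf1 "guestssh,tcp,,2222,,22"

# Then connect from the host:
ssh -p 2222 user@127.0.0.1
```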
Found an outstanding video about VGA signal. Ben
Eater explains what VGA display actually does when it receives an input, how it processes this input
to display pixels on the screen and how to make a circuit using primitive ICs that can generate such
signal for the display.
I have several devices that I use every day: PC, laptop and phone. I want to share some notes
between them, but I don’t want to use cloud solutions like Google Docs. So today I’ve installed
Wiki.js instance on my Ubuntu server. It works fast, uses few resources under my
workloads, and has a lot of useful docs.
This article about sudo rules troubleshooting helped me
a lot with FreeIPA configuration. It was difficult to understand what was going on before I turned
on the SSSD debug logs.
Vim 9.0 was released several days ago. I started using Vim
when the latest version was 6.4. The modes idea and navigation with the h, j, k and l keys were
unfamiliar to me, and I spent a lot of time getting used to them. Now it’s second nature, and
sometimes it’s difficult to use mainstream editors without Vim features. The most annoying thing in
Vim 6.4 was the lack of editor tabs. I knew that there were
buffers, but I did not like them. That is why I was extremely
happy when Vim 7, with tab support, was released. I rarely use this feature nowadays. What a twist
of fate.
One task that I am performing these days is the configuration of a FreeIPA domain. Sometimes it’s
not trivial, so I have to debug some quirky errors from time to time. Today I found
Thomas C. Foulds’s blog, which contains a lot of useful information
about FreeIPA debugging. It’s definitely worth reading.
It’s possible to connect to a remote host over SSH using a public key. It’s a well-known feature
which I use every day. The SSH protocol is also used for the git clone command. I was curious how
Gitea implements support for the SSH protocol. My first idea was a custom SSH server. I inspected
my local Gitea installation and found that there was no custom SSH server running, just the default
sshd. Surprisingly, when I ran the command ssh git@gitea I got the message:
Hi there, <username>! You’ve successfully authenticated with the key named <key name>, but Gitea
does not provide shell access. If this is unexpected, please log in with password and setup Gitea
under another user.
I looked at the source code of Gitea and found
serv.go.
This module allows performing git clone over SSH, but who calls it? I looked at the
.ssh/authorized_keys file in the Gitea home directory and found such a line:
command="/usr/local/bin/gitea --config=/etc/gitea/app.ini serv key-1",... ssh-rsa ...
man authorized_keys explains that the command option specifies a command that is executed instead
of the default shell. So when I run git clone, git starts an SSH connection, the OpenSSH server
checks my key and, if it’s OK, starts the Gitea serv module.
This OpenSSH feature allows restricting certain public keys to a specific operation. E.g. I can
create a key and specify a command that calculates some server statistics. Then, if this key is
compromised, the intruder can only get server statistics and can’t execute arbitrary code.
There are a lot more options for the authorized_keys file. I think the man page is worth reading. I
got a lot of new info about a utility I have been using for a long time.
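For example, a restricted entry in authorized_keys might look like this (the script path and key comment are hypothetical):

```
command="/usr/local/bin/server-stats.sh",no-port-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... stats-key
```

Whatever command the client asks for, sshd runs only the configured script for this key.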
Ubuntu 22.04 distributes Firefox as a snap package. It starts more slowly than Firefox installed as
a deb package, and it’s difficult to control updates of the browser. It’s possible to install
Firefox as a deb package from the Mozilla Team PPA.
There is a
useful guide from omgubuntu.co.uk
about how to configure the system to use this PPA.
Most of RPM-based Linux distributions use
frontends like yum or
dnf. Most of
deb-based Linux distributions use apt-get,
aptitude or apt. Surprisingly, there is an APT-RPM package
manager — a version of apt-get modified to work with RPM.
Annoying bug: it’s
impossible to add a route without a gateway in NetworkManager GUI in Ubuntu 20.04. Fortunately,
there is a workaround: use 0.0.0.0
as a gateway.
Systemd allows passing a single argument to a service. This feature is called
Service Templates.
It can be used for such applications as
OpenVPN (the argument is a connection config
name) or PostgreSQL (the argument is a cluster version).
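As a minimal sketch, a hypothetical template unit /etc/systemd/system/tunnel@.service (the unit name and paths are made up for illustration) receives the argument via the %i specifier:

```
[Unit]
Description=Tunnel to %i

[Service]
ExecStart=/usr/local/bin/tunnel --config /etc/tunnel/%i.conf

[Install]
WantedBy=multi-user.target
```

Starting it with systemctl start tunnel@home runs the service with home as the argument.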
Kubuntu 22.04 has libssl 3.0.2, which is not compatible with libssl from older releases, so the
Viber messenger is not working. It shows a “No Connection” error message even though other network
apps like Firefox work well. I think this bug will be fixed in future Viber versions. For now, it’s
possible to apply a hotfix:
Install libssl 1.1.1 from Ubuntu 21.10:
wget http://security.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1l-1ubuntu1.2_amd64.deb
sudo dpkg -i libssl1.1_1.1.1l-1ubuntu1.2_amd64.deb
Preload libssl 1.1.1 into Viber. Replace the line that contains the Exec instruction in
/usr/share/applications/viber.desktop
with this:
Exec=LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libssl.so /opt/viber/Viber %u
I’ve just updated from Kubuntu 21.04 to Kubuntu 22.04 on my laptop. In the
release notes I found no breaking
changes, only minor updates to different programs. Everything works as expected.
It’s convenient to monitor network activity in the terminal. There are several tools for this task:
bmon, slurm or tcptrack. I prefer to use slurm.
Some people prefer self-hosted services instead of consuming from SaaSS providers. It’s useful to
look through this
comprehensive list of different software
which can be hosted on your own server.
What’s The Deal With Snap Packages?
The Snap package system offers a controversial approach to managing software. It’s definitely not a
silver bullet. Two years ago I wrote
an article with such a point
of view, and today I found an article that shares my point of view in some way. It’s definitely
worth reading. I think the user should control the system, and the package manager should allow
them to do it.
KernelNewbies is an extremely useful resource for those who want to explore kernel changes but are not yet ready to read LKML.
From time to time I have to check my disk usage. It’s possible to use df and du for a rough estimate, but it’s more convenient to have some UI for further analysis. I prefer to use ncdu. Some other tools are described in the article.
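A quick sketch of the rough-estimate commands next to ncdu (ncdu itself is interactive, so it is shown commented out):

```shell
# Free space on the filesystem containing the current directory
df -h .
# Total size of the current directory tree
du -sh .
# Interactive drill-down over the same tree:
# ncdu .
```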
OpenSnitch is a GNU/Linux port of the Little Snitch application firewall. I tried it today on Kubuntu 21.04. It works as expected, so if you were used to Little Snitch on macOS and have switched to Linux, you should definitely give it a chance.
I looked at Gitea as a self-hosted alternative to GitHub. It looks nice: a lot of useful features like global code search and template repositories. There is a comprehensive comparison of Gitea to other Git hosting solutions. It can help you decide whether Gitea suits your needs.
The only thing that surprised me a bit: information about the package registry is already in the official docs, but the PR with this feature was merged only several days ago, so it will only be available in v1.17.0. The most recent release for now is v1.16.5.
I was surprised to find that Linux has multiple routing tables and a set of rules that tell the kernel how to choose the particular table for each packet. There are an article and a Reddit discussion explaining this subject.
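The moving parts can be inspected with iproute2; the table id and addresses below are hypothetical:

```shell
# Rules that pick a routing table for each packet, in priority order
ip rule list
# Routes in the default table
ip route show table main
# A policy-routing sketch: send traffic from one source IP through table 100
# sudo ip route add default via 10.0.0.1 table 100
# sudo ip rule add from 10.0.0.2 lookup 100
```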
Today I created a test stand consisting of several Debian VMs to learn basic network management plus firewall and VPN configuration. It looks astonishing. My primary job is writing software, which is why I rarely configure Linux networks, and it’s a bit complicated for me. Maybe a bit later I’ll write an article about all this.
A really awesome video about hacking the Nintendo Game & Watch. It uses a locked STM32 processor and AES-CTR encrypted flash, but that didn’t help, and the console was hacked one day before release.
I used CentOS as a production environment for work projects. Some time ago Red Hat announced plans to replace the stable CentOS 8 with the rolling-release CentOS Stream, so CentOS was replaced with Oracle Linux. Today I found that Oracle has its own Linux kernel build called the Unbreakable Enterprise Kernel. It adds features like Ksplice that look interesting for high-load enterprise platforms.
I unexpectedly discovered a lot of paid proprietary apps on Snapcraft. I think it’s great, because Linux users and developers have more choices.
Do you feel like you’re getting old and new technologies no longer excite you? Do you think you have seen most things before? I’ve found an interesting discussion on Hacker News about this topic. Different people share their feelings and ways of getting through this period of life.
Falling down the rabbit hole of HiDPI screens and fractional scaling on Linux desktops, I came across an interesting discussion on Reddit. It contains a lot of useful links that help form an opinion on the subject.
I found that in the KDE X11 session, fractional scaling is implemented at the Qt Framework level, so each app scales its output itself. GNOME uses a different technique: it renders everything at an integer factor and then downscales using a raster operation. It’s a more universal approach and does not depend on the GUI framework, but it can degrade font rendering somewhat. The effect can be almost invisible on Apple-like Retina displays with a 1.75 scale factor, but it’s noticeable on an average 13"–14" 1080p laptop screen.
I think the best way to avoid HiDPI rendering issues is to use displays that allow integer scale factors, like x2 on a 14" screen with a 2880×1800 resolution. All other techniques will always be a compromise between GUI framework requirements, multi-screen setup support, performance, and final picture quality.
The majority of GUI apps on Linux are written using a toolkit like GTK or Qt. Such libraries provide high-level abstractions like buttons or labels, but the real rendering is performed by a backend like X11 or Wayland. I’d never thought about it. Today I found that the X11 backend uses Xlib. To understand what level of abstraction GUI libraries provide, you can take a look at this small tutorial. After writing a Hello World with Xlib, it became clear to me why developers introduce more abstraction layers.
I’m learning the STM32 platform. Code generation in STM32CubeIDE makes it easier to configure the different MCU subsystems and start writing “hello world” apps. However, it can be difficult to understand how all this magic works. Fortunately, there is a comprehensive explanation in the STM32CubeMX for STM32 configuration and initialization C code generation user manual, section 6.1.
To make an L2TP/IPsec VPN client work under Kubuntu, install the network-manager-l2tp package.
Today I found that youtube-dl downloads video too slowly. Switching to yt-dlp helped: 10 MiB/s instead of 50 KiB/s.
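A minimal invocation, with a placeholder URL; yt-dlp accepts the same basic command line as youtube-dl:

```shell
# Downloads the best available format by default
yt-dlp "https://www.youtube.com/watch?v=VIDEO_ID"
```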
Some time ago I bought a Focusrite Scarlett gen3 audio interface because it’s one of the devices that work on Linux out of the box. Unfortunately, some features of the device can be accessed only through Focusrite Control, which is not available on Linux. Recently I found an interesting stream about improving the Focusrite Scarlett driver that shows how to access additional features of the device on Linux. It is worth watching if you are interested in reverse engineering and Linux drivers.
I have a network SMB share with different videos that I want to watch over the network on my devices. Today I tried to watch a video on my Kubuntu laptop with VLC. It printed errors like:
Your input can't be opened:
VLC is unable to open the MRL 'smb://10.0.0.1/Share/video.mkv'. Check the log for details.
I checked the logs and found nothing. I tried to set the username and password in the settings as suggested on the Internet, with no results. I also tried to install kio-fuse as described on Reddit, but nothing changed. So I think there is some bug in VLC, which is why I used a different solution:
- Open /usr/share/applications/vlc.desktop.
- Remove the line X-KDE-Protocols=ftp,http,https,mms,rtmp,rtsp,sftp,smb.
- Make sure that kio-fuse is installed.
X-KDE-Protocols instructs Dolphin to pass smb URLs directly to VLC. Without this line, kio mounts the smb share and allows VLC to work with a local file.
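The second step can be done non-interactively; a sketch using sed (it keeps a .bak copy, since package updates may restore the line):

```shell
# Delete the X-KDE-Protocols line from VLC's desktop entry
sudo sed -i.bak '/^X-KDE-Protocols=/d' /usr/share/applications/vlc.desktop
```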
Today I’ve found Albert. It’s a great Alfred-like app for
Linux.
Zoom has become the de facto standard for online communication for me. On Linux, Zoom depends on ibus. It looks a bit strange, because Zoom works fine without this dependency, and in some cases ibus can cause problems. There is a nice article about how to repack Zoom to work without ibus.
Recently I found a page explaining the types of packages in Ubuntu. It gives information about the strengths and weaknesses of each packaging system and discusses the best choice for different usage scenarios. The article gives me a lot to think about. Although certain solutions seem controversial to me, the article clarifies Canonical’s vision for package systems.
Is it time to switch from Docker to Podman?
I have a dual-boot setup on my PC: Ubuntu and Windows 10. Sometimes Windows needs to be rebooted to process system updates. By default, Grub loads the first system from the list, which is Ubuntu. It’s inconvenient. I think it’s better to make Grub remember the last loaded OS and start it after a reboot.
To achieve this behaviour, perform these steps:
- Add the following lines to /etc/default/grub:
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
- Execute sudo update-grub.
Found KDE Timeline. Looks interesting. The times of KDE 3 were really
awesome.
Recently I found that the ping utility on Ubuntu 20.04 and Ubuntu 21.04 works without root permissions, the suid flag, or the CAP_NET_RAW capability. The kernel documentation says that ping uses ICMP_PROTO datagram sockets, and it’s possible to allow users without root permissions to create such sockets:
ping_group_range - 2 INTEGERS
Restrict ICMP_PROTO datagram sockets to users in the group range. The default is “1 0”, meaning that nobody (not even root) may create ping sockets. Setting it to “100 100” would grant permissions to the single group. “0 4294967295” would enable it for the world, “100 4294967295” would enable it for the users, but not daemons.
I checked /proc/sys/net/ipv4/ping_group_range and found the interval 0 2147483647. Also, there is a code example that demonstrates the use of ICMP_PROTO sockets.
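The current range can be read and widened with sysctl; the wide-open value mirrors the documentation quoted above:

```shell
# Which group IDs may create ICMP datagram ("ping") sockets
cat /proc/sys/net/ipv4/ping_group_range
# Allow every group (requires root; persist it via /etc/sysctl.d/ if needed)
sudo sysctl -w net.ipv4.ping_group_range="0 2147483647"
```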
I use Thunderbird as an e-mail client. I configured several rules on the server side to split messages from INBOX into several folders (Jira, GitLab, etc.). It’s important to execute these rules on the server side because I have several clients, and I don’t want to create the same rules for each client. By default, Thunderbird checks only the INBOX folder of an IMAP account, so I had no notifications about new mail in the other folders. Recently I found that this can be fixed.
To make Thunderbird check for new messages in all folders:
- open Thunderbird preferences;
- open the Config Editor;
- search for mail.server.default.check_all_folders_for_new;
- change the value from False to True.
My 14" laptop has a Full HD screen. It looks like some sort of HiDPI to me, which is why interface controls at native resolution are too small and difficult to use. On Windows, the interface was scaled to 125% and looked a lot better.
I tried to make the same settings on Ubuntu 21.04, and there were some issues. There are two options: a new Wayland session and a more traditional Xorg session.
When I set a 125% scale for the interface in the Wayland session, it looks great, but some apps like Google Chrome, VS Code, or Slack look blurry out of the box. For some apps it’s possible to fix this using experimental features, which can lead to instability and random crashes. Also, there is an extremely annoying bug in Firefox: some popups like Multi-Account Containers are cropped, and it’s impossible to use them. It will be fixed only in Firefox 93, which will be released in October. There are a lot of small issues on Wayland, like the Vim clipboard, which is why I’m not sure that Wayland is ready for the desktop now.
When I set a 125% scale for the interface in the Xorg session, all apps work properly, but there are some artifacts in the interface, like black rectangles around windows or black lines in random places on the screen. It’s better than Wayland but still not good enough.
I tried different Linux distributions and different desktop environments to find the best solution for cases like mine. Xfce and Cinnamon work fine only with x2 and x3 scales. Elementary works with an x2 scale and allows increasing font sizes, so text looks fine, but controls are too small. The co-founder of Elementary says it’s OK and I just need to buy another laptop.
The solution I found is Kubuntu 21.04 with KDE 5.21.4. It allows setting a 125% scale out of the box. Everything works: Qt apps, GTK apps, Firefox, Google Chrome, and so on. That is why I’m now a happy KDE user.
I used to ask questions about different open source projects on Freenode IRC several years ago. Today I found that the Freenode staff members had left the company and started Libera.Chat. You can find the roots of this decision on the Wiki page.
Recently Docker updated its subscriptions, and now Docker Desktop remains free only for individuals and small businesses. I think it’s OK that Docker as a company wants to earn money from its products. At the same time, it’s a bit strange to change the rules of the game for existing products. We’ll see how the community reacts to this announcement.
One of the most important parts of laptop security is disk encryption. It can help to protect your personal data if your computer is lost or stolen. It’s impossible to encrypt everything: some unencrypted code must ask the user for the password and decrypt the system.
There are several options when you work with Ubuntu:
- encrypt only the home directory;
- encrypt the operating system partition during the installation process and rely on UEFI Secure Boot for Linux kernel verification;
- encrypt not only the operating system partition but also the boot partition; look at the Full Disk Encryption Howto for more details.
It’s important to remember that disk encryption is not free. There is always a performance hit if you use full disk encryption, but it can be unnoticeable on modern hardware. Phoronix has published some benchmarks of disk encryption overhead.
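Besides published benchmarks, cryptsetup ships a built-in micro-benchmark that measures cipher throughput in memory, without touching the disk, which gives a rough upper bound for LUKS overhead on your CPU:

```shell
# Compares AES, Serpent, Twofish, etc., plus key-derivation (PBKDF) speed
cryptsetup benchmark
```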
I had been using macOS on my laptop for a few years, but recently I switched back to Ubuntu. It has several advantages for my usage pattern: the same environment my programs use in production, Docker with a fast filesystem and without imposed updates, and a lot more control over the system.
One thing that I miss is Time Machine. Having up-to-date backups is extremely important. It helps
when your computer stops working normally, or it has been stolen, or you just want to move from one
laptop to another.
Of course, there are a lot of different backup solutions on Linux. I found articles like The 10 Best Linux Backup Tools, The 15 Best Backup Software For Linux Desktop, or even 25 Outstanding Backup Utilities for Linux Systems in 2020.
All Linux backup utilities can be divided into two groups.
The utilities from the first group offer incremental backups and data encryption, and you don’t have to stop working while the backup is created, but you have to select a directory to back up. Usually it’s the home directory, because there are some pitfalls that stop me from selecting /. That is why, when you need to restore from the backup, you have to install a fresh Linux distribution with all your programs yourself, and only then restore your data like documents or photos.
The second group includes the utilities that back up the entire hard drive, so you can restore the entire system from such a backup. Unfortunately, you can’t use the computer during the backup creation process, and I’m not sure that incremental backups are possible in this case. When you need to restore from such a backup, it’s all or nothing, which can be inconvenient.
I want to find a tool that backs up my data from the home directory but also saves the system configuration somehow. When I need to restore from the backup, the tool should help me install the system with all its settings and then restore the user data. Unfortunately, I haven’t found such a tool yet, so I have to use GNOME’s default backup solution, Déjà Dup.
Sometimes I find interesting links or have some thoughts about software engineering problems that I want to save and share. They are not enough for a blog article, so I don’t publish them there. That is why I created this page, where I’m going to post some short messages.
Hello world!