The first step was to enable log_lock_waits, one of the runtime flags. In Cloud SQL it's just one of the default database flags you can flip to true, and it doesn't require a restart.
After keeping that in production for a while you can query the instance through the Log Explorer with something like the following:
resource.type="cloudsql_database"
resource.labels.database_id="db"
log_name="projects/company/logs/cloudsql.googleapis.com%2Fpostgres.log"
textPayload:"ExclusiveLock"
If your locks are hitting the (default) 1 second threshold they will log something like:
db=db,user=db_write LOG: process 2026853 acquired ShareRowExclusiveLock on relation 19062 of database 16448 after 3267.563 ms
Then you can look up the relation id with select 19062::regclass; and it'll tell you which table is affected. This should give you a good start for your investigation.
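The log entries only cover past waits; to see blocking as it happens you can also ask the instance directly. A sketch using pg_blocking_pids (available since Postgres 9.6):

```sql
-- Who is currently waiting, and which backends are blocking them?
select pid,
       pg_blocking_pids(pid) as blocked_by,
       wait_event_type,
       state,
       query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
```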
A great article about the topic can be found, as always, on the pganalyze blog.
All videos are available on my YouTube channel which I called Infrequent Updates. In retrospect the name is quite accurate.
Berlin to Hong Kong, Star Ferry, walking around and taking pictures of taxis
Hong Kong Park, Peak Tram to Victoria Peak, Hiking down and Kennedy Town
Macaroni Soup for breakfast, visiting old Kai Tak Airport, Checkerboard Hill and finding an old Taxi
Breakfast at Blue Bottle Coffee, walking stairs, Crew Coffee Shop and the burned Kimpton hotel
Fast boats in the park, a very small TV and a hike to Big Wave Bay Beach (Big waves, many stairs!)
Running in Happy Valley, fast bus to Clear Water Bay Beach, Ferraris and Ding Dings
Symphony of Lights, Ice cream truck music, taking ferry at night and driving to Tai Po
Hiking Victoria Peak to Kennedy Town in the fog for 4 hours
October in Berlin. Walking from Kreuzberg to Mitte
Fall in Berlin with leaves and trams
I do still enjoy creating these little visual diaries and will continue to publish new videos on the channel.
As a next step I looked into setting up my own relay and it's surprisingly simple. I now have a relay based on github.com/scsibug/nostr-rs-relay running at wss://nostr.notmyhostna.me.
I was looking for a simple Go + Postgres relay project but that doesn’t seem to exist yet. This is a fast-moving landscape right now though, so this might be outdated information in a week.
If you are using docker-compose, it's as simple as the following snippet. Additionally, you'll have to configure your reverse proxy (nginx, in my case) to point at that container.
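For the reverse proxy part, a minimal nginx snippet could look like the following (server name and TLS handling omitted; the Upgrade/Connection headers are what make WebSockets work through the proxy):

```nginx
location / {
    proxy_pass http://127.0.0.1:7000;
    proxy_http_version 1.1;
    # Required so nginx forwards the WebSocket upgrade handshake.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```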
nostr:
  image: scsibug/nostr-rs-relay:latest
  restart: always
  ports:
    - "7000:8080"
  volumes:
    - /home/ubuntu/services/nostr/nostr-config/config.toml:/usr/src/app/config.toml
    - /home/ubuntu/services/nostr/nostr-data:/usr/src/app/db
The relay starting up and listening for WebSocket connections:
Feb 07 19:39:39.606 INFO nostr_rs_relay: Starting up from main
Feb 07 19:39:39.607 INFO nostr_rs_relay::server: listening on: 0.0.0.0:8080
Feb 07 19:39:39.610 INFO nostr_rs_relay::repo::sqlite: Built a connection pool "writer" (min=1, max=2)
Feb 07 19:39:39.611 INFO nostr_rs_relay::repo::sqlite: Built a connection pool "maintenance" (min=1, max=2)
Feb 07 19:39:39.612 INFO nostr_rs_relay::repo::sqlite: Built a connection pool "reader" (min=4, max=8)
Feb 07 19:39:39.612 INFO nostr_rs_relay::repo::sqlite_migration: DB version = 0
Feb 07 19:39:39.810 INFO nostr_rs_relay::repo::sqlite_migration: database pragma/schema initialized to v15, and ready
Feb 07 19:39:39.810 INFO nostr_rs_relay::repo::sqlite_migration: All migration scripts completed successfully. Welcome to v15.
Feb 07 19:39:39.811 INFO nostr_rs_relay::server: db writer created
Feb 07 19:39:39.811 INFO nostr_rs_relay::server: control message listener started
I used iris.to to write my first “post”. What I found scary is that for all clients you have to use your one-and-only private key as the password to log into your account. I didn’t look too much into existing plans to have finer grained “sub-keys” in the future, but I’m sure that’s something that’s being discussed.
Here's what an event payload looks like; every field is explained in the protocol specification here.
{
"content": "Testing Nostr!",
"created_at": 1675799677,
"id": "e57401dfbe49cd199e60e0b3c4485b96c8286980f07bc9513a66ec21f081d809",
"kind": 1,
"pubkey": "6e5d92642b2a5e03ff59b50ff14b5c54a08ceceb465146985b8ffa3527523c8b",
"sig": "fdf1b3f635cd706b216970c86ec3db35e075369d4feabe64e3789c695bf18dabb2468424aece8885bcad6c8f0aa2d786d055d3c940a2b7da4d5550d6d2555830",
"tags": []
}
Replies to that post will then reference this id in their tags:
{
"created_at": 1675799932,
"pubkey": "fe2d5cf62e95aab419b07b6f8a7b75d3cb3066fae25c6b44ace0f9f30c59303d",
"kind": 1,
"content": "We hear you!",
"sig": "b8313fcc6644fe8cd7841da7e0dcb0381f3155c7b3802fc54d655e60808f29b88c513db3c45c323e6738abf44d96f0e3866893858ae54fc230f971f1c93ca7d9",
"id": "21b120394430ad51188c6fe62632ecb269d41bb5ae9bc6a18a90448e864c6932",
"tags": [
[
"e",
"e57401dfbe49cd199e60e0b3c4485b96c8286980f07bc9513a66ec21f081d809"
],
[
"p",
"6e5d92642b2a5e03ff59b50ff14b5c54a08ceceb465146985b8ffa3527523c8b"
]
]
}
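The id field is not arbitrary: per NIP-01 it is the SHA-256 of a canonical JSON serialization of the event. A small sketch of how a client computes it (I'm fairly sure about the field order, but treat it as illustrative and check the spec):

```python
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """Compute a nostr event id per NIP-01: the sha256 of the canonical
    JSON array [0, pubkey, created_at, kind, tags, content], serialized
    without extra whitespace."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no spaces between tokens
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()
```

Clients then sign this id with the private key to produce the sig field.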
The best way I’ve found so far is nostr.directory. This tool scans people you follow on Twitter and checks if they posted a “proof” tweet to verify their nostr.directory entry.
The project is still in a very early stage. It is more confusing than Mastodon for non-technical newcomers but from a technical point of view it’s very simple.
The iOS app Damus looks more polished than expected, but is also buggy. I was not able to add my own relay to it, for example. That there’s an app at all in the iOS app store at this early stage is a big plus though. It makes giving this a try so much easier.
For now I’ll keep an eye on the project and keep my relay running. I’m looking forward to seeing more projects being built on this protocol.
I started my investigation by looking at the installation log with the following command while clicking through the installation dialog.
tail -f /var/log/install.log
From the logs it became clear that it's a permission problem. The installation process tried to copy data from the installation sandbox to /Library/Extensions/EPSONUSBPrintClass.kext. This failed with: operation not permitted.
I tried running the package installation with sudo, restarting, and updating macOS, but that didn't fix it.
After checking this directory I saw that EPSONUSBPrintClass.kext was created in 2014 and probably got carried over through many macOS updates. Somewhere along the way permissions likely got messed up.
I decided to delete the file. Just deleting it while macOS is running doesn't work via the Finder or through the Terminal, as the Library volume is read-only in newer macOS versions.
Solution
The solution is to start in recovery mode by holding Command (⌘)-R while starting your Mac. Then navigate to Utilities > Terminal and start a Terminal. Everything in recovery mode is a bit slower so be patient.
Navigate to the directory where the kext is located. Keep in mind that it's not in /Library but in /Volumes/Macintosh HD/Library while you are in recovery mode. Replace Macintosh HD with the name of your boot volume. Both directories exist, but /Library won't have the non-standard kernel extension (kext) files, so this might be confusing.
Run rm -rf "/Volumes/Macintosh HD/Library/Extensions/EPSONUSBPrintClass.kext" (note the quotes, since the volume name contains a space) to delete the broken file. Reboot the Mac and re-install the driver. Now everything should work!
Unfortunately the interface will only show "The video file is not compatible"; the actual API request in the background will tell you exactly what is wrong. If you know where to look, that is.
One of the rules is a maximum frame rate of 60 fps. If your video has a higher one, you have to change that. The first step is to extract the raw H.264 stream:
ffmpeg -i your-high-framerate-video.mp4 -c copy -f h264 raw-stream.h264
With the -r flag we can set the frame rate ("Hz value, fraction or abbreviation" according to the documentation).
ffmpeg -r 60 -i raw-stream.h264 -c copy your-60-fps-video.mp4
Now we can upload the video to Twitter. If you want to know the real reason your video doesn't work, you can check that in the network inspector. Look for the request hitting the upload.json endpoint.
This is one of these posts I’m writing for myself so future-me can copy paste from here.
On the technical side it's a Rails app which relies on Sidekiq to process background jobs. Bookmarks have to be fetched, accounts have to be refreshed, tokens have to be kept fresh. To make sure all of that happens and there's no spike of errors that goes unnoticed I'm using AppSignal.
I was looking for a way to send specific, critical alerts to my phone as push notification without relying on another paid service like PagerDuty.
After checking the available integrations on AppSignal I was happy to see that they support webhooks. Webhooks are requests that are sent based on an action: you define an HTTP endpoint on service2, and service1 performs a POST request to it once a specific event occurs.
For the push notifications I already use Pushover to receive notifications from other services like Sonarr or Plex on my phone. Pushover doesn't have a webhook receiver, which means we need another service to receive the webhook from AppSignal, extract the necessary information, and then trigger the Pushover notification.
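The transformation Pipedream performs is essentially "take some fields from the incoming JSON and build the form that Pushover's messages API expects". A sketch in Python (the alert field names here are assumptions; match them to the example payload you configure):

```python
def build_pushover_payload(alert: dict, user_key: str, app_token: str) -> dict:
    """Map an incoming alert webhook payload to the parameters expected by
    Pushover's messages API (POST https://api.pushover.net/1/messages.json).

    The "site" and "message" keys are assumptions about the webhook body;
    adjust them to the example payload pasted into Pipedream."""
    return {
        "token": app_token,            # Pushover application token
        "user": user_key,              # Pushover user key
        "title": alert.get("site", "AppSignal alert"),
        "message": alert.get("message", "Something went wrong"),
        "priority": 1,                 # high priority: bypasses quiet hours
    }
```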
I found a service called Pipedream which does exactly that and offers a bunch of other integrations. Setting it up is easy, but you have to provide an example payload (the one that AppSignal would send) to configure which fields you want to use in the Pushover message.
You can find examples for the various hooks in AppSignal’s documentation. Copy these JSON responses and paste them on Pipedream. Then you can set up which variables should be used in the push notification in the interface shown below.
That's already it. After that, set up which alerts or exceptions should be sent via push notification on AppSignal. Our final setup looks like this:
If you are interested in signing up feel free to use my affiliate link for AppSignal.
I wrote up a short guide on the JustWatch blog in case you are curious.
I used to rip the audio of my favourite movies and listen to them like audiobooks. Because I was usually familiar with the visual parts, I could enjoy the dialogue and the music in a different way.
This sounded like a simple enough task with FFmpeg, so I wanted to give it a try with my favorite movie.
First step is to check what kind of tracks are in the movie file. The best tool for that depends on your source file; in my case that's an MKV file, so I used mkvinfo.
mkvinfo My\ favorite\ movie.mkv
It will give information about the video and audio tracks available in the file. Since we're only interested in the audio track, we look for a section showing Track type: audio. It will look similar to this:
| + Track
| + Track number: 2 (track ID for mkvmerge & mkvextract: 1)
| + Track UID: 2018713736
| + Track type: audio
| + Codec ID: A_DTS
| + Name: 5.1 DTS 1510 Kbps - DTSHD MA Core
The important part is the information about mkvextract, which tells us that it is "track 1" if we use mkvextract to extract the audio track.
We use mkvextract to extract the audio track from the file and define that we want to extract the first track (tracks 1:output.dtshd) and store it as output.dtshd. The file ending doesn't matter.
mkvextract My\ favorite\ movie.mkv tracks 1:output.dtshd
Next step is to transcode that file from DTS-HD to something that’s more portable like ALAC or FLAC.
ALAC:
ffmpeg -i output.dtshd -acodec alac my-favorite-movie-as-an-audiobook.m4a
FLAC:
ffmpeg -i output.dtshd -acodec flac my-favorite-movie-as-an-audiobook.flac
If you want to double-check the bitrate / sampling rate of the FLAC file, you can use metaflac, e.g. metaflac --list my-favorite-movie-as-an-audiobook.flac, to inspect the file.
As Manu correctly pointed out on Twitter you can also combine some of these steps into one.
Also don't have to do a separate extraction step, ffmpeg can do it all in one go: "ffmpeg -i input.mkv -map 0:1 -acodec alac output.m4a".
The reason for going managed was that I didn’t want to deal with backups and have the option to scale up with the click of a button.
I couldn’t find a simple guide on how to move from a self-hosted instance to a managed instance on DigitalOcean and decided to write a short summary.
Goal: Move database from source instance to target instance with little read/write downtime.
Out of scope: zero read/write downtime
Start by creating a new Postgres instance on DigitalOcean. This will take a couple of minutes. After this is done verify that the IP you are accessing the database from is added to “Trusted sources” in the control panel.
Once the database is running, log in with the doadmin user that DigitalOcean is displaying in the interface. I prefer to use a GUI client like Postico to query the database.
Run the following command to create the database and the role you'll be using to access the database. The naming doesn't matter; it's a personal preference.
create role birdfeederdb_prod_write WITH createdb password 'some-very-secure-password';
create database birdfeederdb_prod;
Create the schema under which the tables will be created. The new role will get the permissions to use the schema and create tables.
create schema birdfeederdb;
alter role birdfeederdb_prod_write SET search_path = 'birdfeederdb';
grant usage on schema birdfeederdb to birdfeederdb_prod_write;
grant create on schema birdfeederdb to birdfeederdb_prod_write;
alter database birdfeederdb_prod owner to birdfeederdb_prod_write;
alter schema birdfeederdb owner to birdfeederdb_prod_write;
Stop all writes to the old database. In my case I shut down my app and everything accessing the database.
Replace the value behind the -h flag with the IP or hostname of the source database. You have to provide the username and the database name; it's likely the same information that your app is currently using to access the database.
pg_dump -h 10.0.0.1 -U birdfeederdb_prod_write -p 5432 -Fc birdfeederdb_prod > birdfeederdb_prod.pgsql
Now it’s time to import the backup into the new managed Postgres running on DigitalOcean.
pg_restore -d 'postgresql://birdfeederdb_prod_write:some-very-secure-password@your.instance.hostname.db.ondigitalocean.com:25060/birdfeederdb_prod?sslmode=require' --no-owner --role=birdfeederdb_prod_write --clean --jobs 4 --if-exists birdfeederdb_prod.pgsql
Not using the --if-exists flag will result in seeing non-critical errors. More about that can be read in the Postgres documentation.
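Before flipping the app over, it's worth a quick smoke test that the data actually arrived. One way is to compare the row counts on both instances:

```sql
-- Run on both the old and the new instance and compare the numbers.
-- n_live_tup is an estimate, but good enough as a smoke test.
select relname, n_live_tup
from pg_stat_user_tables
order by n_live_tup desc;
```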
Now you can update the hostname/port in your app and it will start talking to the new database.
users table.
When I tried to truncate my users table I noticed that my foreign keys don't have an ON DELETE action set. In that case, deleting a row that is referenced from another row would fail instead of cascading the delete action further up the tree.
I was wondering what’s the best way to see the dependencies between entities in Postgres and found pg_depend which does exactly that.
with recursive chain as (
select classid, objid, objsubid, conrelid
from pg_depend d
join pg_constraint c on c.oid = objid
where refobjid = 'users'::regclass and deptype = 'n'
union all
select d.classid, d.objid, d.objsubid, c.conrelid
from pg_depend d
join pg_constraint c on c.oid = objid
join chain on d.refobjid = chain.conrelid and d.deptype = 'n'
)
select pg_describe_object(classid, objid, objsubid), pg_get_constraintdef(objid)
from chain;
Source: Thanks to klin on StackOverflow for that neat snippet.
pg_describe_object | pg_get_constraintdef
---|---
constraint fk_rails_c1ff6fa4ac on table bookmarks | FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
The bookmark mapping table maps a tweet to a user. If I deleted a user from the users table that is referenced in the bookmarks table, it would fail.
CREATE TABLE birdfeederdb.bookmarks (
  id BIGSERIAL PRIMARY KEY,
  tweet_id bigint NOT NULL REFERENCES birdfeederdb.tweets(id),
  user_id bigint NOT NULL REFERENCES birdfeederdb.users(id)
);
We have to add a migration to tell the database that it can also delete the row if the row it’s referencing got deleted (“If someone deletes the user, delete the bookmarks of that user”).
After running the migration the schema will look like this:
CREATE TABLE birdfeederdb.bookmarks (
  id BIGSERIAL PRIMARY KEY,
  tweet_id bigint NOT NULL REFERENCES birdfeederdb.tweets(id),
  user_id bigint NOT NULL REFERENCES birdfeederdb.users(id) ON DELETE CASCADE
);
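Outside of Rails, the same change boils down to dropping and re-adding the foreign key. A sketch using the constraint name from the pg_depend output above:

```sql
alter table birdfeederdb.bookmarks
  drop constraint fk_rails_c1ff6fa4ac;

alter table birdfeederdb.bookmarks
  add constraint fk_rails_c1ff6fa4ac
  foreign key (user_id) references birdfeederdb.users(id)
  on delete cascade;
```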
If you use Rails, the migration is as simple as this:
class AddCascadeToBookmarks < ActiveRecord::Migration[7.0]
  def change
    remove_foreign_key "bookmarks", "users"
    add_foreign_key "bookmarks", "users", on_delete: :cascade
  end
end
Now you can just delete a user from the users table and the database will take care of cleaning up the entities that reference that user.
Spoiler: It’s a boring Rails app and it’s good that way.
This would be a straightforward project if you could use the Twitter API for it. Unfortunately, this is not supported by the old Twitter API and not yet implemented in the new one. Luckily, it's on the roadmap for the Twitter API v2.
Because of that limitation I had to build a browser extension that collects the bookmarks while you interact with your Twitter account in the browser. Right now there’s an extension for Google Chrome and one for Mozilla Firefox. I wrote about my experiences with building the extension in another blog post in case you are curious.
The backend of Birdfeeder is built using Rails 7, Postgres, Redis and Sidekiq. The browser extension is using cookies to figure out the Birdfeeder User ID and Twitter User ID (That’s why you have to be logged into both Twitter and Birdfeeder). Periodically it submits new bookmarks to the getbirdfeeder.com/bookmarks endpoint where they will be stored if they don’t exist yet.
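The "stored if they don't exist yet" part maps naturally onto an idempotent insert. In raw SQL it could look like the following sketch (the app itself goes through ActiveRecord, and the unique constraint on (user_id, tweet_id) is an assumption):

```sql
-- Requires a unique index on (user_id, tweet_id); the values are examples.
insert into birdfeederdb.bookmarks (user_id, tweet_id)
values (42, 20220101123456789)
on conflict (user_id, tweet_id) do nothing;
```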
When a bookmark is stored we have to fetch the meta information for the Tweet ID. This is done through a cron job which executes a Redis-backed Sidekiq task. This task is using the user’s Twitter information to fetch metadata for the tweet that will then be displayed in the weekly email. Using the user’s Twitter token to fetch this information makes sure that they can see bookmarked tweets that are private and only visible to them.
If there’s one thing I learned from this project it’s that building email templates is hard if you have to do it by hand.
Luckily I found a framework called Maizzle that makes it as easy as it gets. It is based on Tailwind and compiles your template into email-ready HTML with inline CSS (HTML Emails have a lot of limitations, one of them is that the styles have to be inline).
Can’t recommend it enough!
Rails, TailwindCSS and TailwindUI.
It’s running in Docker on a boring server and I’m using Gitlab for one-click deployments. Emails are being sent by Postmark. I’m using Plausible for analytics as for all my projects.
That’s already all there is to it. Follow me on Twitter if you want to follow the project progress.
To change that I decided to work on a new project. It's called Birdfeeder and it is a simple tool that collects your bookmarked tweets and sends them to you once a week. You can customize the day and time you want to receive the summary email to suit your schedule.
Once a week you’ll get an email like that with your Twitter bookmarks:
Using Birdfeeder is really simple as the browser extension automatically recognizes you if you are logged into Birdfeeder and Twitter at the same time.
To be sure everything is working correctly click on the small Birdfeeder icon in your browser and verify that everything is green.
If you decide to give it a try let me know what you think.
After hearing more and more about Web Extensions and even Safari adding support recently I assumed that now is the time to take this on.
I found out that it was optimistic to think that you'd now write one extension, compile it for the three platforms, upload it to the respective extension stores, and be done.
“Write once, run anywhere” was once again a lie.
Manifest 3 is the new hotness and that’s what you should use
Reality: Mozilla and Google are still fighting over what Manifest v3 is supposed to be. For now, Firefox doesn't really support Manifest v3: their extension testing website only works with v2 (or is broken in general), and all the tooling to build a Safari extension only works for Manifest v2.
The Web Extension shares the same code for all platforms.
Reality: There are subtle differences, so you'll still end up with either platform conditions or separate code bases. Even small things like browser.webRequest.onBeforeSendHeaders and chrome.webRequest.onBeforeSendHeaders behave differently.
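One way to paper over the namespace split is a tiny shim that picks whichever global exists; the helper name here is my own:

```javascript
// Firefox exposes the WebExtension API as `browser` (promise-based),
// Chrome as `chrome` (callback-based, promises in MV3). This shim
// returns whichever global is available, or null outside a browser.
function getExtensionAPI() {
  return globalThis.browser ?? globalThis.chrome ?? null;
}
```

A library like webextension-polyfill goes further and also normalizes the callback/promise differences.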
A simple fetch will behave the same in Chrome and Firefox.
Reality: Firefox does something differently and you'll run into CORS issues. I think it might be related to this issue, but I wasn't able to solve it yet. Currently I just whitelist the moz-ext://... header in the backend, which is not ideal.
In general it seems that the Mozilla Firefox tooling for developing extensions is more polished. They have a CLI tool with web-ext build and web-ext run; the latter runs and auto-reloads the extension, including on Manifest changes, where Chrome always needed me to remove and re-add the extension.
It is possible that some of my solutions or understandings of the issues are wrong, I’m still new to this. If you spot something off, let me know!
The Apple TV currently doesn't support running a VPN directly on the device. Even if it were possible, it wouldn't be that useful, as apps could then check whether a VPN is running and refuse to work. The solution is to move the VPN one layer up, so the Apple TV (or any other device) is just connected to a network and doesn't know anything about the VPN.
There are multiple ways to achieve that; I tried some of them but decided against them after testing.
Integrated Internet Sharing on macOS
This is a feature of macOS that allows a Mac that's connected to the internet with a wired connection to act as an access point, providing WiFi to other devices. This works, but I couldn't figure out how to redirect traffic through the VPN running on the Mac. After playing around with ipfw and L2TP I couldn't get it to work and moved on. It sometimes worked but wasn't stable or fast enough.
Provisioning the Apple TV with a custom Network proxy profile via the Apple Configurator
This worked but some apps didn’t work or realized they were on a proxy or VPN. I suspect the OS is telling the apps that a Proxy Profile is running. In my test case I used authenticated HTTP proxies.
Raspberry Pi
I thought about using a Raspberry Pi on a wired connection, connecting it to a VPN, and sharing this connection via WiFi with the Apple TV. There are multiple ways to do that, but setting up a VPN, updating, and monitoring it sounded like work I didn't want to do.
In the end I wanted the Raspberry Pi solution but in a nicely packaged version where I don’t have to play system administrator.
After more research I found GL-iNet, a Hong Kong based company building all kinds of devices like that.
I pre-ordered the new Beryl MT1300 Travel Router and after a trip around the world it arrived after a few weeks.
The setup was straightforward: connect the included USB-C power adapter and plug an Ethernet cable into the WAN port of the Beryl. Wait 1-2 minutes until it has started up, then connect to the pre-configured WiFi with the GL-MT1300 SSID.
There are easy-to-follow printed instructions (and stickers) included that make all of this a breeze.
Once you are connected to the WiFi, open the web interface available at http://192.168.8.1 and start configuring the router. Everything is mostly self-explanatory.
The full overview with hardware specs can be found on GL-iNet’s website. This is a brief overview covering the features I used so far.
Status dashboard
This is the main view where you can see which uplink is used, which WiFi SSIDs are active, and how many clients are connected. In my case you can see that of the four available networks only one is enabled. I currently don't have a use case for the guest network (with captive portal) or the 2.4 GHz networks.
Easy one-click updates
This is the main reason why I got a ready-made solution and am not using a home-brew Raspberry Pi version where I have to keep dependencies updated, resolve issues and edit configs.
Using the hardware buttons
The router has a small hardware button where frequently used actions can be toggled without visiting the web interface. You can define which one with a single click through the interface:
Custom DNS servers
I use NextDNS on all my devices for ad-blocking and tracking prevention directly on the network level. It’s like Pi-hole without the fiddling. Setting this up for all devices connected to the Beryl is also just one click.
Note: If you haven’t used NextDNS you should give it a try. Read their Privacy Policy while you are at it, it’s very brief: NextDNS Privacy Policy
WireGuard Server
Probably the easiest way to set up a WireGuard server, and you don't even have to do iptables gymnastics.
Tor
Now you can order from the dark net just by connecting to your own Tor WiFi. Neat, I guess?
VPN Client
This is the feature I bought the device for: you can just import your WireGuard or OpenVPN profiles and connect. Then, if enabled, all traffic going through the Beryl gets passed through the VPN.
So far this works very well for me. If you have any questions feel free to reach out on Twitter.
Having them in a big binder makes it very hard to quickly find a document that you know you have somewhere.
The obvious solution was to look into a good setup for a paperless “office”. Before starting my research I narrowed it down to a number of features that I deemed non-negotiable:
That means that at a higher level my setup consists of three parts: A scanner, software to organize documents and a shredder to destroy the documents before throwing them out.
I spent some time reading reviews and in the end decided on buying the Fujitsu ScanSnap iX1500. Its predecessors were well regarded and while everyone agreed that the software was not particularly beautiful it did its job well enough.
Even before starting this project I played around with EagleFiler and enjoyed it. It's from a reputable developer (and blogger) whose apps have been around for a long time.
EagleFiler makes having an open format a feature and not just an afterthought. This, to me, is very important for software that should keep my documents safe and accessible for a long time.
As written on the website:
EagleFiler libraries use an open format: regular files and folders that are fully accessible to your other applications.
The site has a very exhaustive help section documenting every distinct feature of the app. One relevant example: Importing from a Scanner
Right now I have the included scanner software (ScanSnap Home) set up to scan directly into a directory (~/Documents/Incoming Scans). ScanSnap is configured to hand over its scanned documents directly to ABBYY FineReader for ScanSnap, which performs OCR and stores the resulting file in the mentioned Incoming Scans directory.
EagleFiler has a special watch directory called "To Import" located in its library directory (~/Documents/EagleFiler Library in my case), which promptly imports files thrown in there into its internal library. Unfortunately I had issues with that, as ABBYY FineReader then complained that it can only work on files coming directly from ScanSnap, or failed with the following obscure error message.
I contacted the (very helpful and quick to respond) ABBYY support, but they said the issue isn't known after escalating it. My guess is that this is some kind of race condition when multiple apps try to access the same file. I resolved the issue by using Hazel to monitor the "Incoming Scans" folder and move the file to EagleFiler's "To Import" directory once it was processed by ABBYY FineReader and received its _OCR.pdf suffix.
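If you don't have Hazel, the rule is small enough to script yourself. A sketch (the paths and the _OCR.pdf suffix match the setup described above):

```python
import shutil
from pathlib import Path

# Assumed locations from the workflow described above.
INCOMING = Path.home() / "Documents" / "Incoming Scans"
TO_IMPORT = Path.home() / "Documents" / "EagleFiler Library" / "To Import"

def move_finished_scans(incoming: Path, to_import: Path):
    """Move files that FineReader has finished with (marked by the
    _OCR.pdf suffix) into EagleFiler's watch folder. Returns the names
    of the files that were moved."""
    moved = []
    for pdf in incoming.glob("*_OCR.pdf"):
        shutil.move(str(pdf), str(to_import / pdf.name))
        moved.append(pdf.name)
    return moved
```

Run it from a cron job or a launchd agent; Hazel additionally waits until the file is no longer being written, which a robust version should do too.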
With this glue component everything now works without a hitch, and the ABBYY error message never showed up again; I was able to scan 100 documents without it appearing. In my first iteration I stored my "EagleFiler Library" directory in iCloud Drive and suspected that was the problem, but it seems to only be related to ABBYY accessing the file at the same time. I did have another issue with putting it in iCloud Drive, but that was mostly related to ScanSnap not being able to deal with long file paths, and a workaround was swiftly explained by the developer in the support forum.
Update: In the EagleFiler 1.8.14 Changelog I spotted an interesting improvement that sounds like it’ll help with that.
The To Import folder can now be replaced by an alias if you want to relocate it to another location (e.g. to work around a ScanSnap path length limitation).
Tagging & Organizing
This is one of the areas where I'm not completely satisfied with EagleFiler yet, as the importing workflow seems unnecessarily unoptimized for an action that is performed very frequently.
These days I'm importing a lot of documents which don't have metadata yet, as they come (with a slight detour through the OCR app) from the scanner. They do have a filename, but I want to set the "From" field (Phone Company Ltd., Insurance Company, …) and optionally tag them based on what kind of document it is (Insurance, Taxes, Invoices, …).
When a new file comes in, I open its info panel via Command + I and am presented with the following options.
It's easy to navigate through them with the keyboard, but only the "Tags" field auto-completes existing tags. That means I have to use a clipboard manager or an external tool (thread in the support forum) to make sure I use a consistent spelling of the source name (was it Telekom AG or Telekom GmbH again?). Also, to reach the (for me) often-used Tags field, I have to hit Tab 9 times to jump through the date and time fields.
I would love to have a kind of "Inbox" view optimized for quick tagging, similar to how the "New To-Do" modal in Things.app works. It does this exceptionally well, and adding / tagging a new task is fast.
Another usability issue I found was tagging multiple entries. If I scan 10 related documents, it would be helpful if I could just select all of them, open the Info panel, and set the "From" field. This doesn't seem to be possible right now, as the field is disabled in this view.
Even with this small annoyance I’m personally happy with the solution as the search works very well and is extremely fast. Knowing that the developer is actively responding to threads in the support forum and continuously developing the app are more important to me. There’s also a lot of features that I haven’t used yet and will have to incorporate into my workflow at some point.
Once the documents are scanned, it has to be decided whether it's worth keeping the original. Documents where the original needs to be kept, like insurance policies, most tax documents, and salary slips, go back into the binder.
I didn’t feel comfortable with throwing my bank and some tax documents in the regular paper trash so I ordered a small document shredder. In my case that’s a Leitz IQ Home Office Document Shredder. It does the job just fine, any of these will probably do.
Make sure you have proper backups; losing a bunch of files is sometimes easier than losing a big stack of paper documents.
They are not 100% comparable, as one of them is a full-blown publishing platform with features like newsletters, paid membership programs, and other features for professionals. Hugo, on the other hand, is (while also being extremely powerful) much more focused on being customizable and fast. It also doesn't serve the pages dynamically but pre-generates the HTML pages that then get served by the web server (nginx in my case). This makes the whole operation a lot faster, with the downside of not being able to just update a post and have it show up on the page without regenerating the HTML files.
You might ask: “If both are so great why switch then?”
My Ghost workflow consisted of writing posts on iOS or macOS in iA Writer. Once I was ready to publish a post I directly pushed it to Ghost from within iA Writer. This works well but there are multiple problems:
There were a few things that I had to make sure were taken care of in the process; unfortunately there’s no shortcut or out-of-the-box solution for that.
What I definitely wanted to achieve:
To export the posts from Ghost I used their export feature which gives you a nice JSON file to work with. Then I used a tool called ghostToHugo which converts them into Markdown files with the correct file names and a Front Matter that Hugo expects.
Images are not included in the export from Ghost, so you have to get them yourself from your server with scp / ftp or whatever you were using before, and temporarily store them in a directory somewhere. We need them for step 3.
Create a new Hugo site, customize theme, make sure it has working full RSS feeds. This doesn’t sound like a lot but that’s what took up most of my time.
This is not a step-by-step tutorial, as it always depends on what your data looks like; I’m just trying to give an idea of which things I had to do, with some code snippets for inspiration.
This step was the most annoying one, as I had to write a bunch of scripts to fix the exported and converted posts. After using ghostToHugo they were in the right format but in the wrong location, images were embedded in different ways, the images were not in the directory of the post, and the “featured” image of the posts was not set.
This also took up way more time than expected, as I was using Hugo’s Page Bundles. That means that each post would be one directory called 2020-01-01-slug-of-post containing an index.md file with the actual blog post, and any images used in the post would be stored in this directory too. I went with this approach over the default way of having a list of flat files and storing all your images in static/ because that becomes messy very fast.
Script 1: Fix directory structure
The first step was a script that creates these directories and index.md files from the list of flat files exported from Ghost.
Input:
my-old-blog-post.md
Output after my script:
2015-01-01-my-old-blog-post
└── index.md
Most of these scripts are roughly the same, so I just include one for reference and then some snippets. It basically iterates over the directory of posts, extracts the data we need for the new directory structure (date, slug) from the old .md file, creates the directory, moves the .md file and renames it to index.md:
package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"regexp"
	"time"
)

// reDate and reSlug pull the date and slug out of the TOML Front Matter
// that ghostToHugo writes, e.g. date = "2015-01-01T10:00:00Z".
var (
	reDate = regexp.MustCompile(`date\s*=\s*"(.+)"`)
	reSlug = regexp.MustCompile(`slug\s*=\s*"(.+)"`)
)

func main() {
	files, err := ioutil.ReadDir("/Users/philipp/export-hugo/content/post")
	if err != nil {
		log.Fatal(err)
	}
	for _, file := range files {
		f, err := os.Open("/Users/philipp/export-hugo/content/post/" + file.Name())
		if err != nil {
			fmt.Println(err)
			continue
		}
		// Scan the file line by line for the date and slug Front Matter keys.
		scanner := bufio.NewScanner(f)
		scanner.Split(bufio.ScanLines)
		var date, slug string
		for scanner.Scan() {
			matches := reDate.FindStringSubmatch(scanner.Text())
			if len(matches) == 2 {
				date = matches[1]
			}
			matches = reSlug.FindStringSubmatch(scanner.Text())
			if len(matches) == 2 {
				slug = matches[1]
			}
		}
		f.Close()
		t, err := time.Parse(time.RFC3339, date)
		if err != nil {
			fmt.Println(err)
		}
		// Build the page bundle directory, e.g. 2015-01-01-my-old-blog-post.
		newDir := t.Format("2006-01-02") + "-" + slug
		if err := os.Mkdir(newDir, 0755); err != nil {
			fmt.Println(err)
		}
		err = os.Rename("/Users/philipp/export-hugo/content/post/"+file.Name(), newDir+"/index.md")
		if err != nil {
			log.Fatal(err)
		}
	}
}
Script 2: Extract image names, find image and move it
Current state: posts are in the correct format and in the correct location (a directory with date and slug, containing an index.md file with the post body).
We now have to extract all image names from each post, find the images in our directory of images we downloaded, then move the image to the corresponding post directory.
The images are linked in different ways in Ghost, depending on which options you choose or if it’s a pure Markdown post or a mix. I had a bunch of posts that were purely in Markdown format, and a bunch that used <figure> for image captions.
I used the following regular expressions to extract them from the index.md files:
var (
	reImageCaption   = regexp.MustCompile(`figure\ssrc="(.+?)".+caption="<em>(.+?)<\/em>"`)
	reImageNoCaption = regexp.MustCompile(`figure\ssrc="(.+?)".+?`)
	reImagesInline   = regexp.MustCompile(`!\[.*\]\((.+?)\)`)
)
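To illustrate what these match, here’s the caption regex applied to a made-up figure line (the sample line is an assumption for demonstration, not taken from my actual export):

```go
package main

import (
	"fmt"
	"regexp"
)

var reImageCaption = regexp.MustCompile(`figure\ssrc="(.+?)".+caption="<em>(.+?)<\/em>"`)

func main() {
	// A made-up example of a figure line as Ghost might export it.
	line := `<figure src="/content/images/2019/07/photo.jpg" caption="<em>A caption</em>"></figure>`
	if m := reImageCaption.FindStringSubmatch(line); len(m) == 3 {
		fmt.Println("image:", m[1])   // image: /content/images/2019/07/photo.jpg
		fmt.Println("caption:", m[2]) // caption: A caption
	}
}
```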
The script in essence iterates over all posts, tries to find images with the before-mentioned regular expressions and then moves them from their old location to the new one.
package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// reImageCaption, reImageNoCaption and reImagesInline are the
// package-level regular expressions shown above.

func main() {
	files, err := ioutil.ReadDir("/Users/philipp/Blog/blog.notmyhostna.me/content/posts")
	if err != nil {
		log.Fatal(err)
	}
	for _, file := range files {
		if file.Name() == ".DS_Store" {
			continue
		}
		f, err := os.Open("/Users/philipp/Blog/blog.notmyhostna.me/content/posts/" + file.Name())
		if err != nil {
			fmt.Println(err)
			continue
		}
		postFiles, err := ioutil.ReadDir(f.Name())
		if err != nil {
			log.Fatal(err)
		}
		for _, pfl := range postFiles {
			if !strings.Contains(pfl.Name(), ".md") {
				continue
			}
			pf, err := os.Open(f.Name() + "/" + pfl.Name())
			if err != nil {
				fmt.Println(err)
				continue
			}
			scanner := bufio.NewScanner(pf)
			scanner.Split(bufio.ScanLines)
			type imageWithCaption struct {
				url     string
				caption string
			}
			var images []imageWithCaption
			var rows []string
			for scanner.Scan() {
				t := scanner.Text()
				// Case 1: a <figure> with a caption.
				matches := reImageCaption.FindStringSubmatch(t)
				var found bool
				if len(matches) == 3 {
					found = true
					iwc := imageWithCaption{
						url:     matches[1],
						caption: matches[2],
					}
					if strings.Contains(matches[1], "/content/images") {
						images = append(images, iwc)
					}
					rows = append(rows, fmt.Sprintf("![%s](%s)\n\n%s", filepath.Base(iwc.url), filepath.Base(iwc.url), iwc.caption))
				}
				// Case 2: a <figure> without a caption.
				if !found {
					matches2 := reImageNoCaption.FindStringSubmatch(t)
					if len(matches2) == 2 {
						found = true
						iwc := imageWithCaption{
							url: matches2[1],
						}
						if strings.Contains(matches2[1], "/content/images") {
							images = append(images, iwc)
						}
						rows = append(rows, fmt.Sprintf("![%s](%s)", filepath.Base(iwc.url), filepath.Base(iwc.url)))
					}
				}
				// Case 3: a plain Markdown image.
				if !found {
					matches3 := reImagesInline.FindStringSubmatch(t)
					if len(matches3) == 2 {
						found = true
						iwc := imageWithCaption{
							url: matches3[1],
						}
						if strings.Contains(matches3[1], "/content/images") {
							images = append(images, iwc)
						}
					}
				}
				if !found {
					rows = append(rows, t)
				}
			}
			pf.Close()
			f.Close()
			if len(images) == 0 {
				continue
			}
			fmt.Println("images", images)
			// Move every referenced image from the download directory
			// into the post's page bundle.
			for _, iwc := range images {
				oldPath := "/Users/philipp/export-hugo-images" + iwc.url
				fmt.Println("oldPath: ", oldPath)
				fn := filepath.Base(iwc.url)
				newPath := f.Name() + "/" + fn
				fmt.Println("new path: ", newPath)
				err = os.Rename(oldPath, newPath)
				if err != nil {
					fmt.Println("err but moving on")
				}
			}
		}
	}
}
Script 3: Set featured image of post
In the converted files there’s a key called image in the Front Matter of each post. This contains the file name of the image that used to be the “Featured” image of a post in Ghost (the big image above a post).
I didn’t want to be forced to set an image for each post I’m publishing in the future, so I just wanted Hugo to use an image if there’s a file called feature.{jpg,png} in the post directory. To achieve that I added a condition to my template that does just that.
<div class="image">
<a href="{{.RelPermalink}}">
{{ $image := .Resources.GetMatch "feature.*" }}
{{ with $image }}
<img src="{{ .RelPermalink }}">
{{ end }}
</a>
</div>
The next step was to copy the image that was defined in the image key of my post from the downloaded images to my post directory and rename it to feature.{jpg,png}.
That was pretty easy as I just had to extract the image name from the post, iterate over the image files, take the matching one and rename / move it.
var reImage = regexp.MustCompile(`image\s=\s"(.+)"`)
for scanner.Scan() {
matches := reImage.FindStringSubmatch(scanner.Text())
if len(matches) == 2 {
image = matches[1]
}
}
if image == "" {
continue
}
f.Close()
oldPath := "/Users/philipp/export-hugo-images/content" + image
fmt.Println("old image path: ", oldPath)
fn := filepath.Ext(filepath.Base(image))
newPath := f.Name() + "/" + "feature" + fn
fmt.Println("new image path: ", newPath)
err = os.Rename(oldPath, newPath)
if err != nil {
log.Fatal(err)
}
}
The last step was a bunch of search / replace actions in VS Code, sprinkled with some regex magic, to remove old image file paths from the post bodies and to clean up unused keys in the Front Matter (author, image, unnecessary new lines, …).
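Some of this could be scripted too. As a rough sketch (the path pattern is an assumption based on Ghost’s default /content/images/YYYY/MM/ layout), rewriting an old absolute image path in a post body to a bundle-relative one looks like this:

```go
package main

import (
	"fmt"
	"regexp"
)

// reOldImagePath matches the old Ghost image path prefix inside a Markdown link.
var reOldImagePath = regexp.MustCompile(`\(/content/images/\d{4}/\d{2}/`)

func main() {
	body := "![photo](/content/images/2019/07/photo.jpg)"
	// Drop the prefix so the link points at the file inside the page bundle.
	fmt.Println(reOldImagePath.ReplaceAllString(body, "("))
	// ![photo](photo.jpg)
}
```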
It’s very important not to break URLs that are referenced elsewhere or indexed by Google. Luckily there’s already a system in place in Hugo to take care of that. First we have to look at what we are dealing with.
The Ghost URLs for a blog post were in the following format:
blog.notmyhostna.me/slug-of-post
I defined my URL structure in config.yaml to look like this:
permalinks:
posts: ':section/:title/'
This results in URLs in this format:
blog.notmyhostna.me/posts/slug-of-post
Hugo’s solution to the problem is called Aliases: you only have to provide alternative URLs for the given resource in the Front Matter of the post. This was easily done by duplicating the slug key that ghostToHugo created for us and renaming it to aliases. Be aware that aliases accepts a list of URLs, which is why the format looks a bit different in YAML.
---
slug: "apple-ruined-itunes-what-now"
aliases:
- "/apple-ruined-itunes-what-now"
title: "Apple ruined iTunes — What now?"
---
Your post is now reachable from both URLs but with the correct canonical URL set in the header.
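For each alias Hugo generates a small redirect page at the old URL; it looks roughly like this (a sketch, the exact markup depends on your Hugo version):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>https://blog.notmyhostna.me/posts/apple-ruined-itunes-what-now/</title>
    <link rel="canonical" href="https://blog.notmyhostna.me/posts/apple-ruined-itunes-what-now/">
    <meta http-equiv="refresh" content="0; url=https://blog.notmyhostna.me/posts/apple-ruined-itunes-what-now/">
  </head>
</html>
```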
I hope this was somewhat helpful and if you have any specific questions feel free to reach out. Happy to help!
]]>While it’s true that RSS support is probably not growing these days, it’s still around on a surprising number of sites. Shockingly, even Medium.com offers RSS feeds; you just have to construct the URL yourself. Most likely this is done so you have to use their proprietary “Follow” feature and not an open standard.
The current arrangement I’m using consists of three parts:
My current selection of tools hasn’t changed in a long time and I’m still fairly happy with it. The main source of truth for all my feeds is Miniflux. It’s a self-hosted feed reader, written in Go, that I run in Docker on my server. I’ve been a happy user for years already, and if you don’t want the hassle of hosting it yourself you can also support development and let them do the job with the hosted version.
It’s very stable, light on resources and offers a Fever compatible API which makes it possible to use any app that supports Fever to sync with it.
How Fever became one of the “standards” of syncing is still not clear to me. Do you know?
Alternatively there are the low-effort hosted (paid) services: Feedbin and Feedly. I haven’t tried them but they both have been around for a while now and seem reputable.
I currently use Reeder on macOS and iOS but I’m very much looking forward to switching to NetNewsWire once syncing via the Fever API is supported. It’s an open source revival of the classic NetNewsWire (History & Announcement) developed by the original author Brent Simmons.
Being a Safari user I also found a neat extension called “RSS Button for Safari”. Its main feature is adding a button to the browser that detects RSS feeds offered by websites and makes it easy to subscribe with a few clicks. In case you also use Miniflux, just select the option “Custom URL” in the extension’s settings window and set it to the following URL. Clicking the “+” button in the extension will then take you directly to the “New Subscription” section of your Miniflux instance.
https://rss.example.com/bookmarklet?uri=%@
Remember when this was a standard feature of every browser and was located right in the address bar?
]]>I could still tag it with my own tools, tags were written to the files, the folder structure was organized nicely and all was well. After messing up my library once by enabling Cloud Music Library in the early days of Apple Music, and ending up with duplicate Apple Music tracks injected into my own ripped albums, I ran with a hybrid solution for a while.
That hybrid solution was to have Apple Music enabled but without Cloud Music Library. This meant I couldn’t download tracks for offline consumption, and it always felt a bit awkward as I actively had to jump through hoops to use both of them, which obviously didn’t feel right.
Over the years I fed my library with ALAC files converted from FLAC files via XLD and meticulously maintained my library. Incoming files were tagged with Yate and Beets. Both of them are amazing tools in the music organizer’s tool box.
Until today, when I switched from macOS Mojave to Catalina and Music.app messed up my library. It wasn’t opening it any more and was asking for the “Library File”. After looking into the directory where said library was supposed to be, I only saw years of different iTunes libraries, directories for Podcasts and audiobooks, iTunes libraries from 2011 and other cruft. I decided to clean up this mess and look for something new. On top of that, my trusted Last.fm scrobbler NepTunes stopped working.
I’ve been passively looking for iTunes alternatives for the past 10 years, but there was never anything that could convince me to switch. Most of them weren’t really promising, or I disregarded them based on their landing page screenshots.
Over the years I downloaded and tried:
The only really promising one of these was Swinsian, and I’m glad I gave it another go. While it has some odd corners that don’t look like a proper Mac app, it’s very fast and imported my 80MB iTunes library file with 900GB worth of tracks in a couple of minutes. Fetching the album art took a bit longer but that’s understandable. I was briefly worried that it’s not actively developed any more, as the last tweet on their account was from 2011, but then I saw the public changelog, which was updated just a few days ago to support Apple’s new Music.app.
That playlist count is most likely a bug, but the import worked flawlessly and all my old playlists were there again.
It’s also very easy to customize everything, down to how the main window should be structured.
You can even enable an iTunes like grid view.
There’s a lot of garbage in there from not-properly-tagged things in my old library, or where the information wasn’t written into the files correctly, it seems. Now that I have a good interface for it, it’ll be easier to track these down.
I will now once again use a hybrid strategy: Apple Music via the Music app, but without a local library connected to it. That way I can use all the features that make Apple Music great from all devices without having to use a different setup on each of them.
For my “real” local library I’ll use Swinsian, based on my imported iTunes library. I don’t really need to sync my phone with iTunes as I’m just streaming there from Apple Music anyway.
Give it a try!
]]>The Apple Touch Icon specification gives you a way to specify a bunch of assets in various sizes to make a website behave more app-like on iOS, with a launch screen, an app icon or an app title. Most of this is probably from the days before the App Store, when Apple’s “sweet solution” for third-party apps was to build web apps that behave like “native” apps. Now the biggest use case (which is probably still very small) is to provide an icon for websites pinned to the Home Screen.
In the HTML markup of a site this can look like this:
<link rel="apple-touch-icon" href="touch-icon-iphone.png">
<link rel="apple-touch-icon" sizes="152x152" href="touch-icon-ipad.png">
<link rel="apple-touch-icon" sizes="180x180" href="touch-icon-iphone-retina.png">
<link rel="apple-touch-icon" sizes="167x167" href="touch-icon-ipad-retina.png">
A website that I use a lot doesn’t have a proper Apple Touch Icon set which bothers me every time I unlock my phone and see the Home Screen. I was about to email them but then realized that it’s probably a very small edge case and they have more important things to do.
If a website doesn’t set the Apple Touch Icon in the HTML markup iOS just creates an icon based on a screenshot of the website. This works but isn’t really elegant, especially if it’s on your main Home Screen.
I set out to find a solution to force iOS to use a custom icon and couldn’t really find anything. After searching for a bit longer I stumbled upon an old blog post from 2008 that explained exactly what I needed. The problem was that something happened with the website since then and all the links were automatically removed. Even in the comments only parts of some links were visible, the site probably went through a few tech transitions over the years. Luckily the site got captured by the amazing Internet Archive and this snapshot still works.
I created a new bookmark in Safari (⌘D), set the name to “Set touch icon” and left the description empty.
Then right click the bookmark and select “Edit Address”. In here paste the following snippet:
javascript:var%20s=document.createElement('link');s.setAttribute('rel',%20'apple-touch-icon');s.setAttribute('href',prompt('Touch%20icon%20URL?','https://'));document.getElementsByTagName('head')%5B0%5D.appendChild(s);void(s);
It’s injecting a <link> element as a child of the <head> in the website’s HTML markup. Then, when iOS reads the website to create the bookmark, it sees the new link element and follows it to the custom Apple Touch Icon.
Now this should also show up on iOS in your bookmarks if you have bookmark sync via iCloud enabled in Safari.
Use Safari on iOS to navigate to the website you want to bookmark on your Home Screen. Once the site is loaded, open your Favorites in Safari and tap your bookmarklet (Set touch icon). It’ll ask you for a URL and you have to paste the direct URL to the (square) icon you want to use. In my case I uploaded something to Imgur and used the direct link. It’s important to use a direct link like https://i.imgur.com/12345.png.
The linked guide explains what it’s technically doing:
This will bring up a dialogue to prompt for the URL of the icon you wish to use – so make sure your icon is online somewhere. Clicking OK will seemingly do nothing, but what’s actually happened is that the LINK element has been set and the script has finished. Just go ahead and add the site, and your new icon should be used.
I noticed that after I ran the bookmarklet and injected the <link> element, the old screenshot icon was still showing up in the “Add to Home Screen” preview for a few seconds. I had to wait a bit and then it refreshed to the new custom icon we injected.
The result is a beautiful custom icon on our Home Screen.
]]>This took me a bit longer than I’d be willing to admit, partially because the naming in Homebrew is sometimes a bit hard to follow (#11091).
There are three simple steps involved: install the wanted version, unlink the old one, link the new one.
~|⇒ go version
go version go1.13 darwin/amd64
The available versions are listed in the Homebrew directory; the example for Go is here.
~|⇒ brew install go@1.12
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
==> Updated Formulae
openssl@1.1 ✔
==> Downloading https://homebrew.bintray.com/bottles/go@1.12-1.12.9.mojave.bottle.tar.gz
Already downloaded: /Users/username/Library/Caches/Homebrew/downloads/6392e5d3faa67a6132d43699cf470ecc764ba42f38cce8cdccb785c587b8bda8--go@1.12-1.12.9.mojave.bottle.tar.gz
==> Pouring go@1.12-1.12.9.mojave.bottle.tar.gz
==> Caveats
go@1.12 is keg-only, which means it was not symlinked into /usr/local,
because this is an alternate version of another formula.
If you need to have go@1.12 first in your PATH run:
echo 'export PATH="/usr/local/opt/go@1.12/bin:$PATH"' >> ~/.zshrc
==> Summary
🍺 /usr/local/Cellar/go@1.12/1.12.9: 9,819 files, 452.8MB
~|⇒ brew unlink go
Unlinking /usr/local/Cellar/go/1.13... 3 symlinks removed
As the specific formula we want (go@1.12) is a keg-only formula, it must be linked with --force. If you try without, it’ll tell you just that. As explained in the FAQ, a keg-only formula is one that’s only installed into the /usr/local/Cellar directory without being linked automatically.
~|⇒ brew link --force go@1.12
Linking /usr/local/Cellar/go@1.12/1.12.9... 3 symlinks created
If you need to have this software first in your PATH instead consider running:
echo 'export PATH="/usr/local/opt/go@1.12/bin:$PATH"' >> ~/.zshrc
Now that everything is linked correctly it should show the specific version we want:
~|⇒ go version
go version go1.12.9 darwin/amd64
I hope this was helpful. If you like posts like this, follow me on Twitter: @tehwey
]]>I received a report as an Excel (.xlsx) file that I had to import into our system. The first step is usually to convert it to a CSV file and feed it into our importer, a service written in Go. The Go libraries for Excel files are not very nice, to say the least, so CSV is the way to go. I converted it using Microsoft Excel for Mac. After running the import I saw that it was silently discarding the first column of the imported file.
I built a quick example to replicate the problem and figure out if it’s related to the CSV library we are using: gocarina/gocsv. The neat feature of the library is that you can just unmarshal into a struct without using a loop, checking for EOF and other error cases. It behaves more like the json package of the standard library.
package main
import (
"encoding/csv"
"fmt"
"io"
"os"
"github.com/gocarina/gocsv"
)
// Row is a row with columns
type Row struct {
One string `csv:"One"`
Two string `csv:"Two"`
Three string `csv:"Three"`
}
func main() {
cf, err := os.OpenFile(os.Getenv("FILEPATH"), os.O_RDWR|os.O_CREATE, os.ModePerm)
if err != nil {
panic(err)
}
defer cf.Close()
// Use ; as separator as that's what Excel gives us
gocsv.SetCSVReader(func(in io.Reader) gocsv.CSVReader {
r := csv.NewReader(in)
r.Comma = ';'
return r
})
var rows []Row
if err := gocsv.UnmarshalFile(cf, &rows); err != nil {
fmt.Println(err)
}
for _, row := range rows {
fmt.Println("row", row)
}
}
Even with the minimal example the first column never got unmarshaled into the struct. To confirm it wasn’t a problem with the library, I quickly built a second example using smartystreets/scanners, which also didn’t work.
Note: When I later tried to replicate it at home I couldn’t, and smartystreets/scanners worked using this minimal example. I’m not sure why it didn’t work when I tried it the first time, maybe I was using an older version. For completeness’ sake this is the code I’m using now that works.
package main
import (
"fmt"
"log"
"os"
"github.com/smartystreets/scanners/csv"
)
// Row is a row with columns
type Row struct {
One string `csv:"One"`
Two string `csv:"Two"`
Three string `csv:"Three"`
}
func main() {
cf, err := os.OpenFile(os.Getenv("FILEPATH"), os.O_RDWR|os.O_CREATE, os.ModePerm)
if err != nil {
panic(err)
}
defer cf.Close()
scanner := csv.NewScanner(cf,
csv.Comma(';'), csv.Comment('#'), csv.ContinueOnError(true))
for scanner.Scan() {
if err := scanner.Error(); err != nil {
log.Panic(err)
} else {
fmt.Println(scanner.Record())
}
}
}
After seeing that both libraries (at least at the time of the investigation) failed to pick up the first column, I realized that it’s probably a problem with the file itself.
I ran a file from a previous report through the importer and it worked. At that point it was clear that it’s a problem with the actual file and not the importer. I did what I should’ve done at the beginning and looked at the raw file with Hex Fiend which is one of my favorite tools and an incredible addition to everyone’s toolbox.
After comparing the Hex representation of the working and non-working file it was very clear what was happening.
The non-working file was prepended with three invisible characters, which upon closer inspection turned out to be the bytes EF BB BF.
If you know a thing or two about encodings you know that’s the BOM (Byte Order Mark) and that Go doesn’t like it (”We don’t like BOMs.” - bradfitz on the Go issue tracker).
Wikipedia says:
The byte order mark (BOM) is a Unicode character, U+FEFF BYTE ORDER MARK (BOM), whose appearance as a magic number at the start of a text stream can signal several things to a program reading the text:[1]
The byte order, or endianness, of the text stream; The fact that the text stream’s encoding is Unicode, to a high level of confidence; Which Unicode encoding the text stream is encoded as.
As it turns out Microsoft adds these if you save a CSV from Excel.
Microsoft compilers and interpreters, and many pieces of software on Microsoft Windows such as Notepad treat the BOM as a required magic number rather than use heuristics. These tools add a BOM when saving text as UTF-8, and cannot interpret UTF-8 unless the BOM is present or the file contains only ASCII. Google Docs also adds a BOM when converting a document to a plain text file for download.
On top of that, this behavior is different between Microsoft Excel for Mac Version 15.12.3 and Version 16.28. I tried to replicate the issue on Version 15 at first, exported the CSV, and the BOM control characters weren’t inserted. I then upgraded and saw that they added a UTF-8 CSV option in addition to the “normal” CSV export, which is now buried all the way at the bottom of the save dialog (not even pictured in this screenshot), and the default one is UTF-8 with the additional BOM characters.
I created three files: two from Excel Version 16 using the UTF-8 and the “normal” export, and one from the only available CSV export in Excel Version 15, where you can easily spot the difference.
Running the Go importer with gocsv and the three example files, the first column is missing in the example using the UTF-8 version of the file.
csv|master⚡ ⇒ FILEPATH=csv_excel_15_example.csv go run csv.go
row {Foo Bar Baz}
csv|master⚡ ⇒ FILEPATH=csv_excel_16_example.csv go run csv.go
row {Foo Bar Baz}
csv|master⚡ ⇒ FILEPATH=csv_excel_16_utf8_example.csv go run csv.go
row { Bar Baz}
After removing the BOM characters and running the importer everything worked as expected:
csv|master⚡ ⇒ FILEPATH=csv_excel_15_example.csv go run csv.go
row {Foo Bar Baz}
csv|master⚡ ⇒ FILEPATH=csv_excel_16_example.csv go run csv.go
row {Foo Bar Baz}
csv|master⚡ ⇒ FILEPATH=csv_excel_16_utf8_example.csv go run csv.go
row {Foo Bar Baz}
If you can’t touch the file, can’t do a properly encoded export, and can’t switch to a different CSV library, you could also use the dimchansky/utfbom library to strip the BOM before parsing the file.
o, err := ioutil.ReadAll(utfbom.SkipOnly(bufio.NewReader(cf)))
if err != nil {
fmt.Println(err)
return
}
It’s always the “Scheiß Encoding” in the end, isn’t it?
]]>Today I finally sat down to get this done before something bad happens to my data. After doing some research I had to decide between restic and Borg. Both of them looked very promising but in the end I settled on restic as it’s written in Go and not Python like Borg is.
Scheduled backups from my server hosted at OVH to my Synology NAS running in my local network at home.
Install restic
I don’t really need to explain much about how to do that as it’s all well explained in other popular guides like Jake Jarvis’ blog post “Automatically Backup a Linux VPS to a Separate Cloud Storage Service”. There’s also a very easy to follow official documentation which I mostly followed for my setup.
Just install it based on the instructions for your operating system. Run restic version to see if you are on the latest version and then continue with the next step.
SSH configuration
You have to make sure to have a working public key authenticated SSH connection between your backup source and backup target.
In my case I had to create an SSH key on my server and then copy that one to my NAS. This can be done by using these two commands:
ssh-keygen -t ed25519 -o -a 100
ssh-copy-id -i ~/.ssh/id_ed25519.pub -p 57564 username@username.synology.me
If the SSH server on the target is running on a non-standard SSH port, make sure to set up a ~/.ssh/config file to set all these parameters, as you can’t set them in the restic backup command later on. I run the backup as root, so all this is done while being logged in as root. If you don’t like that, there’s a section in the documentation explaining how to do just that.
The SSH config file could look like this; the ServerAliveInterval and ServerAliveCountMax parameters were suggested in the forum.
Host username.synology.me
HostName username.synology.me
User username
Port 12345
ServerAliveInterval 60
ServerAliveCountMax 240
If you type ssh username.synology.me it should connect via SSH and you should be logged into your NAS without typing a password. This has to work before moving on to the next step.
This sets up the directory where the backups are going to be stored. In my case I did the backups over SFTP to my NAS, so make sure you can log into your backup target via SSH / SFTP if you use this strategy. Of course there are also a lot of other backup targets you can use (S3, Backblaze, DigitalOcean etc.). I tried to use Google Cloud Storage on my first attempt but couldn’t get it to work; you might have more patience than me.
With this command we create the backup repository on the remote host via SFTP:
restic -r sftp:username@username.synology.me:/backup-remote/notmyhostname-2019 init --verbose
After running this you can double check that by logging into your NAS and making sure the directory got created. In the directory should be a config file and a bunch of other standard directories that restic creates.
dewey@alexandria:/volume1/Archive/backup-remote/notmyhostname-2019$ ls -lah
total 4.0K
drwxrwxrwx+ 1 dewey users 66 Jul 18 21:38 .
drwxrwxrwx+ 1 dewey users 98 Jul 18 20:53 ..
-rw------- 1 dewey users 155 Jul 18 21:31 config
drwx------ 1 dewey users 1.0K Jul 18 21:29 data
drwx------ 1 dewey users 256 Jul 18 22:18 index
drwx------ 1 dewey users 128 Jul 18 21:31 keys
drwx------ 1 dewey users 0 Jul 18 22:18 locks
drwx------ 1 dewey users 384 Jul 18 22:18 snapshots
Warning for Synology users: If you look closely you’ll see that there’s a mismatch between the path on the NAS in the ls -lah output and where the restic command is supposed to create the repository: /backup-remote/notmyhostname-2019 vs. /volume1/Archive/backup-remote/notmyhostname-2019. You’d think that the repository command would need to look like:
restic -r sftp:username@username.synology.me:/volume1/Archive/backup-remote/notmyhostname-2019 init --verbose
That’s what I thought too, but as I later found out and confirmed via the official restic forum (which was very helpful and active), this is a “feature” of Synology where the root directory of an SFTP user is actually the user’s home directory. So what would be / for an SSH user is actually /volume1/Archive/ for an SFTP user. I also answered that question in my thread on the forum for other Synology users.
We need to define which files to include in or exclude from the backup. I prefer providing both an inclusion and an exclusion file. This makes it very explicit which files are supposed to be in the backup.
I created two files for that purpose:
root@notmyhostname:~/.config/restic# cat includes
/etc
/home/ubuntu
root@notmyhostname:~/.config/restic# cat excludes
/home/ubuntu/services/**/deluge-data
/home/ubuntu/.cache
How the rules work is all defined in the documentation.
Every time you run a restic command you have to provide the repository path and the password via an environment variable or command line flag. To make this a bit less annoying I’d suggest you create a file like backup.sh and export the variables there before running the actual command. After creating the file with the following content, just make it executable with chmod +x backup.sh and run it with ./backup.sh.
#!/bin/bash
export RESTIC_REPOSITORY="sftp:username@username.synology.me:/backup-remote/notmyhostname-2019"
export RESTIC_PASSWORD="changeme"
restic backup --verbose --files-from /root/.config/restic/includes --exclude-file=/root/.config/restic/excludes
This will create your first backup, and if everything is working the only thing left to do is run it via a cronjob.
Add this to your /etc/crontab
file and your backup will run at the given interval. I’m logging errors to a file but you can also use a script to send an email, push notification or whatever you prefer.
0 */12 * * * root /root/backup.sh 2>> /var/log/restic.log
As we don’t want to keep the entire backup history we can clean up old backups after a while. The easiest way is to add the forget command at the end of your backup.sh
:
restic forget --verbose --prune --keep-hourly 6 --keep-daily 7 --keep-weekly 4 --keep-monthly 12
This should be all that’s needed. To make sure it works perform a restore of some example files by following this step.
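Such a test restore might look like the following sketch. It assumes the same repository variables as backup.sh and picks /etc/hostname as an arbitrary example file; restic’s restore subcommand with the latest snapshot selector does the work:

```shell
#!/bin/bash
# Restore one example file from the most recent snapshot into a scratch
# directory, then compare it against the live file on disk.
export RESTIC_REPOSITORY="sftp:username@username.synology.me:/backup-remote/notmyhostname-2019"
export RESTIC_PASSWORD="changeme"

restic restore latest --target /tmp/restore-test --include /etc/hostname

# If the restored copy matches the original, the backup round-trip works.
diff /etc/hostname /tmp/restore-test/etc/hostname && echo "restore OK"
```

Remember to delete /tmp/restore-test afterwards so the test data doesn’t linger on disk.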
By running the restic snapshots
command you can double-check the backup history. As you can see, backups are created every 12 hours, just as defined in the cron job.
root@notmyhostname:~# RESTIC_REPOSITORY="sftp:username@username.synology.me:/backup-remote/notmyhostname-2019" RESTIC_PASSWORD="changeme" restic snapshots
repository 070c204c opened successfully, password is correct
ID Time Host Tags Paths
----------------------------------------------------------------------
88eb6cda 2019-07-18 19:36:01 notmyhostname /etc
1e0dda19 2019-07-18 20:17:47 notmyhostname /etc
/home/ubuntu
988c8b69 2019-07-18 21:37:22 notmyhostname /etc
/home/ubuntu
c713b4ad 2019-07-19 00:00:02 notmyhostname /etc
/home/ubuntu
651853b0 2019-07-19 12:00:01 notmyhostname /etc
/home/ubuntu
56571ab6 2019-07-20 00:00:01 notmyhostname /etc
/home/ubuntu
5a4b7500 2019-07-20 12:00:01 notmyhostname /etc
/home/ubuntu
----------------------------------------------------------------------
7 snapshots
root@notmyhostname:~#
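If you want to see what the retention policy from above would delete before it actually runs, restic’s --dry-run flag for forget and the check subcommand are useful. A sketch, assuming the same variables as backup.sh:

```shell
#!/bin/bash
# Preview the retention policy without deleting anything, then verify
# the structural integrity of the repository.
export RESTIC_REPOSITORY="sftp:username@username.synology.me:/backup-remote/notmyhostname-2019"
export RESTIC_PASSWORD="changeme"

# Lists which snapshots would be kept and which would be removed.
restic forget --dry-run --keep-hourly 6 --keep-daily 7 --keep-weekly 4 --keep-monthly 12

# Worth running occasionally, e.g. from a monthly cronjob.
restic check
```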
If you have any questions or suggestions, let me know via Twitter.
]]>At the end of this post there’s a “Recommended reading music” for some background tunes.
Thanks to Manu for his encouraging words that motivated me to write this up!
This time with Singapore Air after going with Lufthansa (via BKK) last time, and Qatar Airways (via DIA) before that. The itinerary was not bookable through the web but with the help of a family member working in a travel agency I was able to book the following routing:
| From | To | Aircraft | Duration |
|---|---|---|---|
| TXL | MUC | A321-200 | 1h |
| MUC | SIN | A350-900 | 11h50m |
| SIN | KUL | B737-800 | 1h |
Return flight was the same routing but operated by Lufthansa:
| From | To | Aircraft | Duration |
|---|---|---|---|
| KUL | SIN | A330-300 | 1h |
| SIN | MUC | A350-900 | 13h |
| MUC | TXL | A319-200 | 1h |
Full details on my FlightRadar24 account.
In the end I wasn’t able to make use of my layover in Singapore because I only had one hour to spend there.
Originally I was excited about seeing the Jewel at Changi Airport but I will have to do that some other time.
The flight was pretty enjoyable and the crew on Singapore Airlines was very welcoming. There were lots of food options, ice cream and even in Economy everyone received a Singapore branded amenity bag with a toothbrush, toothpaste and socks which is very different from other carriers. They didn’t get to be #1 this year (Qatar passed them) but they are a close second place in Skytrax’s “World’s Top 100 Airlines 2019” ranking for a reason.
In hindsight I would probably try not to have two layovers again as the connection was a bit tight. If possible I’d also try to get on the A380 again like last time. The problem with booking it through a third party was that I wasn’t able to properly assign the seats in the Singapore Airlines app, I got an aisle seat—essential on a long flight like that—but I would’ve preferred a bulkhead seat. I wrote more about my other preferences in this post.
After being in Kuala Lumpur a couple of times already it was easy to get around, take the train to the other terminal and finally see Trix again after what felt like a very long time.
It was great to be back and we only had one week until we had to leave for our trip to China already. There are two things about KL that surprise me every time I’m back, even though I expect them by now: the freezing air conditioning indoors and the tropical heat outside.
So I always end up wearing long pants, wearing a light sweater on my way to the office and keeping a second, thicker sweater in the office locker to get through the day. I started to turn off the AC units that were blowing directly on me in the office, and as it turns out the other people in the office appreciated that too, as they were freezing as well.
It’s easier to get used to the heat and it doesn’t really bother me any more.
After evaluating a bunch of co-working spaces in Kuala Lumpur (KL) last time, I settled on Colony, close to KLCC, and went back there again. This time I got a reserved desk, which meant 24/7 access, a desk that was mine and a locker to leave my things in. That was a big improvement over last time, when I only had a pre-paid pass and had to leave when the co-working space closed (6pm) or on public holidays. Now I could come and go whenever I wanted, which was great as I was working my usual German hours of 10am to 7pm. I’ll definitely do that again next time.
The biggest selling point of this office over the others is that it’s very close to KLCC: after crossing the street you can already enter the pedestrian tunnel (with AC!) which ends in the basement of the Petronas Towers at the LRT station.
From the LRT it was usually a 10-minute walk through the tunnel to the office, with a short break at a grocery store to buy my breakfast chocolate bread.
The office also has a gym and a great rooftop pool with a great view of the towers. I didn’t take them up on the offer this time because they require a gym membership that I hadn’t signed up for, which I only realized when it was already too late. I’ll go there next time.
Data plans for phones are very cheap in Malaysia, especially if you are used to the highway robbery that’s practiced by the German phone providers. I signed up for MAXIS’s pre-paid plan HotLink.
While providers at home start to undermine net neutrality with offers like StreamOn (Telekom.de), it’s already a lost cause in Malaysia, where having your data volume sliced up per app or company is the standard. Need more traffic for Instagram, WhatsApp, Facebook? Book the social plan. Want traffic that doesn’t count between 1am and 7am? Get the night owl plan. There’s something for everybody and it’s very hard to keep track of, or even understand, what’s free and what’s not. At some point I had 5 different kinds of traffic I could use up. This was the case across all the providers I checked before settling on one.
State of net neutrality in Malaysia pic.twitter.com/w0Tj3dCRGU
— dewey (@tehwey) May 10, 2019
On the other hand, residential internet via fast and cheap fiber is readily available. I have yet to see an ad for a residential 1Gbit internet connection in Germany.
Getting around is very easy if you are close to the LRT stations, for the rest there’s Grab.
Our morning commute was just a few stations with the train which is running on an elevated track and therefore has a rather scenic view.
Food, so much good food. There’s probably no country I’ve been to with a bigger variety of amazing food than Malaysia. Excellent Chinese, Indian and Malay restaurants are everywhere. The late-night mamak or kopitiam is easy to find, and they have become my favorite places to go back to.
Sebastian mentioned that this post didn’t really have an ending, so I tried to add one. Thanks for the feedback!
After four very eventful weeks (with a short trip to China in between) that passed quickly, it was time to head back to Berlin and leave the equator behind. I wish I could have stayed longer as I was just getting used to my new commute and to meeting up for lunch every day. I had adjusted to the climate, the public transport and the food, but, alas, the office was waiting for me in Berlin.
The flight was pretty decent and rather uneventful, with Lufthansa as mentioned in the beginning. When I arrived in Munich I had 3 hours until my connecting flight to Berlin, so I decided to take the airport up on its great food choices. I found a good spot in the Käfer restaurant and had a proper Bavarian breakfast with Weißwurst, sweet mustard and Breze, which I enjoyed until it was time to head to the gate. I arrived in Berlin at 11am on Saturday, which meant I still had some weekend left to adjust and rest.
That was my first trip report so let me know if you have any questions or improvements! Danke you!
There are some ongoing shows that I have been following for years and whose full backlog I haven’t worked through yet, but usually I try to listen to everything I start.
I still keep ended shows around in case there’s a follow-up episode or the host announces a new related show. These are marked with “(ended)” here. The dots from 1 to 3 visualize how often I listen to them: three dots (∙∙∙) mean I listen to all episodes, two mean everything since I started following but not the backlog, and one means only from time to time.
I’m a huge fan of the podcasts that do a deep dive on very specific topics like Slow Burn (Watergate scandal), Brady Heywood’s Apollo 13 episodes, Caliphate (ISIS), Containers (Container shipping, the ones on the sea, not on your computer) and Welcome to Macintosh (History of the Apple Macintosh) so try these if you are interested in one of these topics. Revisionist History is also a great on-going show that’s best described in their own words:
Revisionist History will go back and reinterpret something from the past: an event, a person, an idea. Something overlooked. Something misunderstood
The host, Malcolm Gladwell, does an excellent job in explaining a large variety of topics in depth and has an amazing lineup of guests to help him dig deeper into specific ideas.
If you watched the HBO show Chernobyl and sometimes wondered what was real and what was dramatized for TV, the accompanying podcast gives you all the answers directly from the show’s creator.
Listen to Ear Hustle if you ever wondered what life in an American prison is like. It has likeable hosts both from inside and outside the prison. It’s one of my all-time favorites and I learned a lot about something I didn’t know much about before.
Give On Margins a try if you’d like to hear from very interesting people about their book projects; the first episode is already a very good one. SW945, also produced by Craig Mod, is a podcast consisting mostly of field recordings from Japan during his long hike. Strong recommendation.
If you like crime shows, the three listed here are all great in their own way and mostly deal with unsolved mysteries. The Teacher’s Pet even has some current developments going on, so there might be another episode coming some time. I probably don’t have to say anything about Serial, as it’s probably the most famous podcast and the one that put the medium on the map.
The tech, aviation and Formula 1 podcasts are the ones I currently listen to as they are all on-going and a bit more niche than the other ones. If you are interested in these topics you probably already have your favorite show in that area. I can highly recommend Shift+F1 though and even if you don’t know anything about F1 you can start with their “What’s this Formula 1 thing anyway?” season primer which explains everything for a complete beginner.
]]>Noise-cancelling headphones (non-negotiable)
Even if you don’t listen to music and just wear them with noise cancelling activated, these are life savers; once you’ve flown with a pair you won’t be able to go back or even understand how people can fly without them. I usually listen to podcasts or ambient noise through apps like Rain Rain.
My recommendation: Bose QuietComfort 35
One of my favorite recent discoveries was Craig Mod’s new podcast called SW945, which consists of daily field recordings from his walk in Japan. You can read more about it in “The Glorious, Almost-Disconnected Boredom of My Walk in Japan”, published on Wired. I listened to all the episodes on my flight to Kuala Lumpur and enjoyed them so much that I sent him an email sharing the experience right after landing.
My inner fanboy got a bit excited when I saw that this somehow made it into the Wired piece:
My hope was that others could “listen along” to the walk. Someone emailed and said that, on a recent long-haul flight, they had put in noise-canceling headphones, covered their head with a blanket, and listened to the walk for five hours. This made me unreasonably happy.
This describes my way of traveling pretty accurately: I try to sleep as much as I can, listen to relaxing background noises when I’m not sleeping and use the provided blanket to block out distractions as well as possible.
Food & Drinks
Nothing with caffeine, as my feet get nervous if I drink too much of it and then have to sit for a long time. Lots of water, sometimes a beer to help me sleep. Be very selective with the food that gets served: everything is thrown away anyway, so eat the part that is easily digestible and leave the rest. Bring cereal bars and proper sandwiches instead (except on AirAsia, where the food is excellent). Don’t bring food that smells and bothers people.
Bring your own empty bottle and ask them to fill it up with water during the first service. That way you don’t have to watch out for them coming by, find a place for the plastic cups or wait for the next round of trash collection. It makes everything easier and you drink more while not producing unnecessary trash. The air in planes is very dry.
Seat
I was always a big fan of window seats but if I can’t get a bulkhead seat or one where I can easily leave I’ll stick with the aisle seats now if the flight is longer than 3 hours. That way you can get up as often as you want, walk around, get more water if you need to and use the bathroom. Do your seat research with SeatGuru and pay for a good seat. It’s usually well worth it and you don’t have to worry about being late for online check-in.
Seat pockets
Don’t use them, and especially don’t put important things like boarding passes, phones or passports in there. Apart from being one of the filthiest spots on a plane, it’s a recipe for disaster. Also don’t use the bathroom without shoes; it’s probably more disgusting than you think, and planes are not cleaned as often as you might hope.
(Don’t) Recline your seat
Nothing gets my blood boiling more than people who recline all the way in Economy class. Once everyone is sleeping that’s probably fine, any other time not so much. There’s a very good article on the subject from the Points Guy. If you want a flat bed you should probably pay a bit more.
Board early
While it may seem very cool to sit around at the gate until the very last minute, it actually just makes life harder for yourself. The overhead bins for cabin luggage fill up pretty quickly, and people boarding last have to find a spot for their luggage that may not be close to their row, or worse, a few rows behind them. If you then have a tight connection and have to go against the stream of disembarking passengers with a huge suitcase to retrieve your luggage first, you are going to have a bad time.
Travel light, if possible
Traveling with just your cabin-sized luggage will make your life so much easier and less stressful. Queue to check in your luggage? Nope. Stare at the slowly moving luggage belt waiting for it to spit out your luggage? Nope. Worrying about what you are going to wear if it doesn’t? Nope. On top of that you are also not the person that needs 4 seats on the airport train for their luggage. Wins all around. Roll your clothes tightly and don’t bring things you can easily buy at the destination (toothpaste and other things that you’d have to separate out at airport security).
]]>Excited about this fact I followed the short official guide and set everything up. Unfortunately I ran into a problem which took me a bit to resolve: Every time I linked images from my blog post the push to Ghost would fail with a very generic error message:
Failed to upload some images. Received an error when connecting to the service
I figured that it was maybe related to me self-hosting the blog. It’s currently running behind nginx so my guess was that something is going wrong with the proxying, caching or other annoying to debug settings.
After testing with a few differently sized images I realized it only happened with images bigger than 1MB, which pointed me to nginx’s file upload limit. The fix was to raise the limit via client_max_body_size
and reload nginx.
Example from my configuration
server {
listen 443;
server_name blog.notmyhostna.me;
...
client_max_body_size 50M;
...
}
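After changing the limit it’s worth validating the configuration before reloading, since a typo would take the site down. A quick sketch (the test file size is arbitrary, just above the old 1MB default):

```shell
# Validate the configuration, then reload nginx without downtime.
nginx -t && nginx -s reload

# Create a file just over the old 1MB limit to retest the upload
# through whatever client originally triggered the error.
dd if=/dev/zero of=/tmp/upload-test.jpg bs=1M count=2
```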
After doing that I also found that this is mentioned in the Ghost docs here.
Hope that’s helpful to someone else finding that through a search engine.
]]>Recently, mostly after being annoyed by algorithmically sorted news feeds everywhere, I decided to get back in the game.
As a huge fan of RSS feeds I tested my way through a bunch of RSS readers over the years. The ones I used most are NetNewsWire (Mac and iOS), Reeder (Mac and iOS), Newsbeuter (terminal based), Miniflux (web) and recently also ReadKit (Mac).
First step of rebooting my RSS setup was to set up my favorite self-hosted RSS reader Miniflux. It deals with periodically fetching all your feeds and providing an API based on the Fever “standard”. That API format is supported by a lot of apps across platforms. This makes it very easy to use your favorite app and keep it in sync with your Miniflux instance. Miniflux itself doesn’t provide any mobile or native apps and only lives in your browser. The web interface is fast, easy to use and the author is rightly opinionated on keeping it that way.
Preferably I like to have a native app that I can maximize without being stuck in a browser window surrounded by distractions but that’s where the problems start:
There are not that many great feed readers for macOS right now that work with the Fever API and have the look and feel of a native Mac app, and I’m hoping that I simply missed one that matches my criteria.
What mostly sparked my blog post was that the two apps which looked most promising seem to have been abandoned or buggy to the degree that they are unusable. There’s a big selection of very polished and feature-rich apps for iOS but the counterpart for macOS seems to be missing. If you are only interested in iOS apps for iPhone and iPad and don’t need an app for macOS there’s a good selection of reviews and articles over at MacStories.net (Fiery Feeds, The RSS revival, lire)
I bought ReadKit for Mac a while ago as it supports Pinboard and the Fever API and would collect all my unread items in one neat, native app. Unfortunately it crashes a lot when I refresh my feeds, too frequently to keep using it.
Hey, I didn’t get any reply when I sent it to the support email address so I wanted to ask if there’s anything else I can do to help you track down the bug? Is ReadKit still being actively maintained? I attached two more crash reports in case that’s helpful.
I reported the bug and sent bug reports multiple times but never got a reply via email or Twitter.
@ReadKit Hey, is the app still being developed actively? Sent some bug report via email 2 weeks ago but haven't heard anything back yet.
— dewey (@tehwey) June 11, 2018
Last update on the App Store: January 12, 2018
Last update on Twitter: February 20, 2018
I bought Reeder for Mac a long time ago when the first version came out. It looked polished back then and still does to this day, supports the Fever API and even has an iOS app with the same familiar theme. Some time after I bought it a new version came out that I had to pay for again. I was not too happy about that, as the previous version had gone without updates for a long time, but I understand the limitations of the App Store and the missing upgrade pricing option, so I went ahead and bought the new version. Unfortunately, just like the previous versions, it seems to undergo long stretches of abandonment where it’s unclear whether the developer is still interested in updating it.
I wrote a quick Mail but didn’t get a reply.
Subject: Should I buy?
Hey, I was using an older version of Reeder and then stopped using it after the first paid upgrade. I’m just looking into RSS readers again and Reeder still looks like my top choice. ReadKit is too buggy… I just wanted to ask if the app is still actively maintained or if there’s a new version coming out soon? Don’t want to get bitten again by a paid upgrade shortly after I bought it. Last update in the app store was 2017 and the Twitter account was mostly tweeting in 2015 so I’m a bit unsure.
Last update on the App Store: November 22, 2017
Last update on Twitter: November 25, 2015
I know I’m not entitled to a reply from developers, especially the ones where I’m not a current customer but some sign of activity on Twitter, the company blog or via email would go a long way of convincing people that they should spend their money on these apps.
Leaf, looks nice but doesn’t support the Fever API
NetNewsWire, when I started using RSS feeds this was the gold standard and I used it heavily for years. Now it got acquired, and after seeing the landing page I got excited. It looks great!
Unfortunately it seems that the app is extremely buggy and, for that, quite expensive.
Newsbeuter, a CLI based feed reader. Similar to mutt for email. I used this for a while and described my setup in another blog post but it’s not something I still want to use.
Based on this disappointing selection I’m currently just relying on the Miniflux web interface and Reeder on iOS that I bought a while ago. I hope it just keeps working.
If there are any apps I missed that check off my requirements please let me know. You can find me on Twitter as @tehwey; the discussion is on Hacker News.
]]>It would sound very cliché to say that I was interested in Hong Kong ever since I watched early Jackie Chan movies, but there’s probably a bit of truth to that. The city with its neon lights, red taxis, twisted roads and dark alleys has always fascinated me. A construction site featuring bamboo scaffolding and the ubiquitous green fabric wrapped around it has been my background image for a long time now, perfectly captured by Peter Steinhauer’s “Cocoon” series.
This year I finally had the chance to visit the city for two weeks and planned out an itinerary. I set out to evaluate if I could potentially live there, so the goal was to see not just the city with its various neighbourhoods but also the surrounding nature and beaches.
I chose end of March to beginning of June as my time frame, which later turned out to be a good choice. It was hot but never too hot and, most importantly, not too humid. I expected otherwise, but only once was there any rain.
Usually I try not to plan too much ahead, as I’m the kind of traveler who walks around trying to get lost rather than checking off the top list of tourist destinations alongside hordes of Lonely Planet-carrying tourists. This time I decided to plan ahead a bit more and created a list of experiences I wanted to have.
Mostly it was food places I found, got recommended or saw on Anthony Bourdain’s shows which I’m a huge fan of and partly it was the classic tourist destinations like Victoria Park, Happy Valley’s horse racing track, Dragon’s Back hike, Star Ferry and others.
During our stay we were living at Chungking Mansions, which was both affordable and close to the MTR.
While many locals avoid it for its reputation and for some of its less legitimate business enterprises, it offers cheap rooms and asylum for people, specifically refugees, from all around the world…and even a taste of home.
Another fitting quote from Bourdain.
A side street located within 5 minutes of walking from Chungking Mansions
I’m not ashamed to admit that I make a habit out of visiting the local Apple Stores everywhere I go if time permits. They are usually located in nice neighbourhoods and architecturally interesting. To me that seems like a good enough reason.
Also called CFCs, cooked food centres are usually located in buildings that could be mistaken for a parking garage or a deserted warehouse. Conveniently located next to bigger MTR stations, the food centres are a collection point for street food vendors. To free up precious street space and to keep an eye on food quality, it was decided to move the vendors from the open streets into multi-storey buildings that also house the wet markets. The ground floor of a CFC usually houses a wet market where you can buy fresh fish, vegetables and all kinds of different spices.
Up the gray, slow-moving escalator and you’ll find yourself on a huge floor filled with different kinds of restaurants. These places don’t look fancy; green plastic chairs and big round tables line the corridor that takes you past all the different restaurants. It’s hard to tell which tables belong to which place; the chair color can sometimes be an indicator. Walking down the corridor is an experience in itself: tables filled with all kinds of food, fresh fish swimming their last rounds in the aquariums next to the kitchen. The noise of plates being thrown into barrels for washing later mixes with the blaring TVs trying to get your attention, while the monotonous humming of the numerous ceiling fans fades into the background.
Or as Anthony Bourdain on the “Hong Kong” episode of Parts Unknown sums it up nicely:
Cheap delicious food served from open-air stalls. Pull up a plastic stool. Crack a beer. Fire up the wok.
Plastic tables, toilet paper rolls as napkins - the focus is on the food, not the interior.
Tsui Wah, a chain of diner-like restaurants with great comfort food and breakfast options. If you are like me and don’t eat a lot of meat for breakfast, there are sugar buns: bread, toasted and drenched in condensed milk and sugar. Paired with a thick-rimmed cup of milk tea, it’s a good way to start your day.
Having multiple beaches in close vicinity is one of the cool features of Hong Kong. They are clean, easily accessible by public transport or taxi and have bathroom and changing room facilities. The concrete structures on the beach, paired with the pastel colors of the late afternoon sky, always had a very Japanese vibe, or, not having been to Japan yet, what I imagine it would look like.
Everyone knows them, the gray spikes pushing through the green surroundings of Hong Kong like beehives.
The neon signs are unfortunately on the decline, but I was still able to enjoy some of the last remaining ones before they are all replaced by modern LED alternatives.
The police station of Rush Hour 2 fame is now closed down but you can still visit it as all the signs are still there.
It’s too busy with Instagrammers these days; that’s the one thing that makes it interesting now, if you enjoy people watching.
I want to highlight two albums that I have been listening to a lot in the past months.
Leslie Cheung’s album 常在心頭 with my favorite song: 癡心 and 一片痴 (YouTube)
Anita Mui’s album 妖女 with my favorite song: 邁向新一天 (YouTube)
Overall it was a great trip and I can only recommend the city, if you are going make sure to plan some down days to go to the beach or do a hike.
If this sparked your interest I’d suggest watching Anthony Bourdain’s Hong Kong episode, as it was one of his best, featuring an array of interesting people like Christopher Doyle, the cinematographer of “In the Mood for Love”. Sadly, Bourdain has passed away since I started writing this post.
]]>1Password
AppCleaner
Arq
Atom
BetterSnapTool
Calibre
Captured
Carbon Copy Cloner
Charles
Docker
Flux
iTerm
MacDown
Paw
Postico
Pixelmator
Textual
Tweetbot
Viscosity
Tower
Transmission
Dash
XLD
LICEcap
TripMode
NepTunes
GPGTools
Kaleidoscope
Yate
The Unarchiver
I’m already using Twitter and AIM through Bitlbee, so checking whether there’s support for Hangouts was the first thing I did. Bitlbee is an amazing piece of software that lets you use a lot of different chat services through a unified IRC interface. Unfortunately Hangouts is a proprietary protocol now and you can’t just use Jabber as was the case with GTalk back in the day. There’s a new library that reverse-engineered the protocol and enables developers to keep building on it: hangups
. There’s a whole list of other projects using that library now.
One of the projects inspired by hangups is purple-hangouts
, an additional plugin for the already existing libpurple
library. libpurple is the IM library behind Pidgin.
Install purple-hangouts
The overlay we need to install purple-hangouts
is already in the official overlays so we can just run layman -L | grep mrueg
to see if it’s still there and then add the repository to our overlays with layman -a mrueg
. Then just run eix-sync
to update your eix
index and install it with emerge -av purple-hangouts
.
Enable libpurple support in Bitlbee
purple-hangouts
is just an additional plugin for the already existing libpurple
library, so we have to make sure bitlbee
is compiled with libpurple support enabled. To do that, set the USE flags of the bitlbee package to include purple
and then emerge the package again.
root@notmyhostname /etc/portage$ cat package.use | grep bitlbee
net-im/bitlbee otr twitter purple
Configure Bitlbee
After restarting Bitlbee (/etc/init.d/bitlbee restart
) you should be able to connect to your server again. Type help purple
in your &bitlbee
channel and you should see hangouts
in the list of available services.
[14:21:25] dewey help purple
[14:21:30] root BitlBee libpurple module supports the following IM protocols:
[14:21:30] root
[14:21:30] root * aim (AIM)
[14:21:30] root * hangouts (Hangouts)
...
To add the account to your Bitlbee configuration just run account add hangouts example@gmail.com
with your Google Account email. To set the password you’ll have to use the /OPER
command, which should look like this:
/OPER hangouts <your google password>
[14:23:15] dewey account add hangouts example@gmail.com
[14:23:15] root Account successfully added with tag hangouts
[14:23:15] root You can now use the /OPER command to enter the password
[14:24:00] dewey account list
[14:24:00] root 0 (twitter): twitter, tehwey (connected)
[14:24:00] root 2 (hangouts): hangouts, example@gmail.com
[14:24:00] root End of account list
[14:24:43] dewey account on hangouts
Once that is done we are ready to just turn it on which is done by running account hangouts on
which will in turn trigger the authentication process with Google. This will open a new IRC query that prompts you to click an URL and paste an oAuth token. This URL is currently broken so to finalize the login process we’ll have to get an oAuth token from Google through another way.
Google is trying to prevent this so there are some additional hoops we have to jump through. A workaround is currently being discussed on the issue tracker.
Just follow the instructions in this comment and grab the OAuth token, then paste it into the IRC query that prompted you for the token and hit return. Now we are ready to finally log in with account hangouts on
:
[14:34:29] dewey account hangouts on
[14:34:35] root hangouts - Logging in: Authenticating
[14:34:35] root hangouts - Logging in: Logged in
Done, that was easy!
Show real names instead of IDs
If you prefer not to memorize long IDs I’d recommend enabling full names for the hangouts plugin. This will show the real name people set on their Hangouts profile instead. To do that, run the following commands:
account hangouts set nick_format %full_name
account hangouts off
account hangouts on
That’s it!
]]>I’m just documenting this here so I can be lazy and come back to this post in the future. As usual I’m using Gentoo and nginx here, but this should work for almost every other configuration. This is the minimal nginx config I usually use; it scores A+ on SSL Labs and works well for all my needs.
To get the certificates I’m using the official tool that is now under the umbrella of the EFF: certbot
Install this tool on your system, stop your currently running web server to free up the port and then just run the tool:
$ certbot certonly
If you run it for the first time it’ll ask you to accept some terms and to enter your email address. After that you’ll see this screen:
Select option 2 here and continue. On the next screen just enter the domain you want to get your certificate for and press OK.
After that the certificate and private key will be generated and are located in /etc/letsencrypt/live/example.com/
. I usually use this path directly in nginx so I don’t have to copy around certificates once I renew them.
The nginx config is really basic and just looks like this. I split off the ssl.conf
because it’s the same for every domain’s config and I didn’t want to duplicate all that. That’s why it’s just imported and that way I don’t have to update all configs if I update the cipherlist in the future.
server {
listen 443 default_server ssl;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/nginx/ssl.conf;
error_log /var/log/nginx/example.com.error.ssl.log;
access_log /var/log/nginx/example.com.access.ssl.log;
root /var/www/example.com/;
index index.html;
location / {
alias /var/www/example.com/example.com/;
}
}
server {
listen 80;
server_name example.com;
return 301 https://$server_name$request_uri;
}
The ssl.conf
looks like this right now. Make sure you have generated the dhparam.pem file with openssl and that it's located in that directory.
ssl on;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_dhparam /etc/nginx/ssl/example.com/dhparam.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.4.4 8.8.8.8 valid=300s;
resolver_timeout 10s;
The Let’s Encrypt certificates currently expire every 90 days (Reason). Because of that we have to set up a script that periodically checks whether they are still valid and, if not, renews them. Luckily this is all really easy; we just have to add a new cronjob. For the tool to run we have to stop the web server again so it can use the included webserver to set up the endpoint that talks to the Let’s Encrypt API. We can do that with the --pre-hook
and --post-hook
parameters.
@weekly /usr/bin/certbot renew --standalone --pre-hook "/etc/init.d/nginx stop" --post-hook "/etc/init.d/nginx start" --quiet
This also makes it very easy to run shell scripts that do additional things after a renewal. For ZNC, for example, you can use a tiny script like this to update the certificate the bouncer's web interface is using.
root@examplecom ~$ cat znc-ssl-update.sh
#!/bin/bash
cat /etc/letsencrypt/live/example.com/{privkey,cert,chain}.pem > /home/dewey/.znc/znc.pem
chown dewey:dewey /home/dewey/.znc/znc.pem
Once this is all done just restart nginx and enjoy your free certificates.
If you want to support this organization please consider donating to the EFF: https://supporters.eff.org/donate/
]]>duplicity
was broken for a long time now. Luckily someone wrote a new backend that is now able to make use of the new OAuth API requirement enforced by Google.
To make use of that new backend we’ll have to run the latest version of duplicity
and duply
. Both of them aren’t in portage yet so in an attempt to fix that we are going to use our own local overlay that will be located in /usr/local/portage
. This is not to be confused with the regular location of portage: /etc/portage
.
Step 1: Create the necessary directories and set the appropriate permissions:
mkdir -p /usr/local/portage/{metadata,profiles}
echo 'LocalOverlay' > /usr/local/portage/profiles/repo_name
echo 'masters = gentoo' > /usr/local/portage/metadata/layout.conf
chown -R portage:portage /usr/local/portage
Step 2: Create the make.conf for local ebuilds
Edit or create your local.conf
located in /etc/portage/repos.conf/local.conf
and add the new overlay:
[LocalOverlay]
location = /usr/local/portage
masters = gentoo
auto-sync = no
Step 3: Add the ebuild to your local overlay
Because the version we are looking for is not located in portage right now we’ll have to use our own ebuild. We can just grab the one someone posted on Gentoo’s Bugtracker (Thanks!):
wget -O duplicity-0.7.03.ebuild https://bugs.gentoo.org/attachment.cgi\?id\=404618
This one’s a few weeks old already and the duplicity
maintainers already released a new version so just rename it to duplicity-0.7.04.ebuild
, which is the latest development version right now.
mkdir -p /usr/local/portage/app-backup/duplicity
mkdir /usr/local/portage/app-backup/duplicity/files
cp duplicity-0.7.04.ebuild /usr/local/portage/app-backup/duplicity
chown -R portage:portage /usr/local/portage
pushd /usr/local/portage/app-backup/duplicity
repoman manifest
popd
This ebuild also needs a patch (duplicity-0.6.24-skip-test.patch
) to work. The patch skips some failing tests and was applied to an earlier version. We need to provide the file ourselves so emerge will be able to apply it (the ebuild looks for it in the wrong place).
You should be able to find that file in /usr/portage/app-backup/duplicity/files
Just copy it into our local files
directory
cp duplicity-0.6.24-skip-test.patch /usr/local/portage/app-backup/duplicity/files
and run ebuild duplicity-0.7.04.ebuild manifest
to generate a new manifest with the new patch file included.
Now you should be able to emerge the latest version:
root@notmyhostname /usr/local/portage/app-backup/duplicity$ emerge -av1 app-backup/duplicity
These are the packages that would be merged, in order:
Calculating dependencies... done!
[ebuild N ~] app-backup/duplicity-0.7.04::LocalOverlay USE="-s3 {-test}" PYTHON_TARGETS="python2_7" 0 KiB
Total: 1 package (1 new), Size of downloads: 0 KiB
Would you like to merge these packages? [Yes/No]
If duplicity
isn’t in your package.accepted_keywords
file yet and portage is trying to emerge an old version just edit /etc/portage/package.accepted_keywords
and add the following keywords:
app-backup/duplicity ~amd64
app-backup/duply ~amd64
Now that we also added duply
we can also install the latest version of duply
via emerge (Version 1.10).
Step 4: Install PyDrive
Right now there’s no version of PyDrive listed in the portage tree so we’ll have to install it via pip
.
Run pip install PyDrive
to install the latest version.
Step 5: Create Google API credentials
The whole process is explained on the duplicity
man page (man duplicity
) but either Google changed their interface or the man page isn’t very detailed because the process is a little bit different now.
First log into your Google Account and access:
https://console.developers.google.com
Click on “Create Project” and wait for Google to process/create the Project. Once it’s done click on the project and then on “APIs & auth” in the sidebar. You’ll see a bunch of different APIs listed there but we only need “Drive API” which is located in the section called “Google Apps APIs”. Make sure to click on “Enable API” on the top.
Once it’s enabled navigate to “Credentials” also located in the sidebar and click on “Add Credentials”. Select “OAuth Client ID”, then “Other” and then “Create”. In the next step you’ll be able to obtain your “Client ID” and “Client Secret”. We are going to make use of these credentials later.
Step 6: Make duplicity use the new backend
In our duply config file (usually located in ~/.duply/<server name>/config
) I was using the following target until the API broke:
TARGET='gdocs://username:password@example.com/backup-incoming/notmyhostna.me'
In the latest version of duplicity the default backend for gdocs
is now PyDrive, and gdocs is an alias for it, so we don't need to change anything here. The username and password will be ignored by duplicity because we are going to supply the credentials another way.
Create a file called gdrive
in ~/.duply/<server name>/
with the following content:
client_config_backend: settings
client_config:
client_id: xxx.apps.googleusercontent.com
client_secret: yyyy
save_credentials: True
save_credentials_backend: file
save_credentials_file: gdrive.cache
get_refresh_token: True
Make sure to change client_id
and client_secret
to the values you created earlier and keep the indentation exactly as shown, otherwise you'll run into errors later on.
The gdrive.cache
file will be created by duplicity
after logging in for the first time.
Step 7: Run duply / duplicity
To pass the name of the file containing our credentials to duplicity we could use an environment variable called GOOGLE_DRIVE_SETTINGS
like this:
GOOGLE_DRIVE_SETTINGS=gdrive duply <server name> status
If you don’t want to use the variable like that it’s possible to add export GOOGLE_DRIVE_SETTINGS=gdrive
to the top of duply’s configuration file.
This was suggested on the duplicity-talk mailing list and is already present in the new configuration files for duply 1.10.
Running duply <server name> status
for the first time will start the authentication process with Google by displaying a link to start the OAuth authentication flow. Click on that link, log into your account and copy the string Google presents to you after being logged in successfully. Paste the string into your terminal and press return to finish the login process.
The whole process looks like this:
GOOGLE_DRIVE_SETTINGS=./gdrive duply notmyhostname status
Start duply v1.10, time is 2015-08-15 17:34:56.
...
--- Start running command STATUS at 17:34:56.933 ---
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth...
Enter verification code: yyyyy
Authentication successful.
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sun Apr 12 00:00:04 2015
This should now be the duply output you are already familiar with!
That’s it!
]]>Install the Tor client, make sure you are running a recent version.
emerge -av net-misc/tor
If you want to automatically start Tor at boot just add the service with the default run level: rc-update add tor default
. If you want to start the service right away just use rc-service tor start
.
In this step we are going to add our hidden service to the Tor configuration file called torrc
located in /etc/tor/torrc
. It’s possible to run multiple hidden services, and it’s not a problem to service the same hidden service from multiple ports. In that case just duplicate the HiddenServiceDir
and HiddenServicePort
lines like that:
HiddenServiceDir /usr/local/etc/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080
HiddenServiceDir /usr/local/etc/tor/other_hidden_service/
HiddenServicePort 6667 127.0.0.1:6667
HiddenServicePort 22 127.0.0.1:22
We are only going to add one service for now.
Open the configuration file using a text editor (vim /etc/tor/torrc
) and add your hidden service:
#
# Minimal torrc so tor will work out of the box
#
User tor
PIDFile /var/run/tor/tor.pid
Log notice syslog
DataDirectory /var/lib/tor/data
HiddenServiceDir /var/lib/tor/data/hs_notmyhostna.me/
HiddenServicePort 80 127.0.0.1:80
Reload your Tor service after that.
/etc/init.d/tor reload
Once you do that the new hostnames and the private keys for your hidden services will be created and are now located in your DataDirectory
. In my case it’ll look like this:
root@notmyhostname /var/lib/tor$ tree
.
└── data
├── cached-certs
├── cached-microdesc-consensus
├── cached-microdescs
├── cached-microdescs.new
├── hidden_service
│ ├── hostname
│ └── private_key
├── hs_example.com
│ ├── hostname
│ └── private_key
├── hs_notmyhostna.me
│ ├── hostname
│ └── private_key
├── lock
└── state
4 directories, 12 files
Important:
First, Tor will generate a new public/private keypair for your hidden service. The file called private_key
contains your private key; make sure you don't share it with others, since anyone who has it can impersonate your hidden service.
The file hostname
will contain the .onion URL for your new hidden service (that's the aforementioned public key, or to be specific, a hash of the public key). In my case that's j5wfzhvvrf2dwm2v.onion. Write it down somewhere because you'll need it in the next step.
Create a new configuration file for the service you want to serve. In my case I’ll call it tor_notmyhostna.me
just so it’s easier to differentiate between that and the config file for the non-Tor site.
Open it with vim /etc/nginx/sites-available/tor_notmyhostna.me. The configuration could look like this:
server {
listen 127.0.0.1:80;
server_name cc7yjqvfpwtl6uhy.onion;
error_log /var/log/nginx/tor_notmyhostna.me.error.log;
access_log off;
location / {
root /var/www/notmyhostna.me/;
index index.html;
}
}
Make sure to replace the server_name with the hostname
from the earlier step.
Now just enable the new configuration by symlinking it into the sites-enabled directory and reload nginx:
ln -s /etc/nginx/sites-available/tor_notmyhostna.me /etc/nginx/sites-enabled/tor_notmyhostna.me
/etc/init.d/nginx reload
To test it just download or start TorBrowser and try to access your personal .onion URL. Unfortunately serving a hidden service over https is not yet possible (There are exceptions…). If you want more information about that read this blog post by the Tor developers.
That’s all. Easy!
Source:
Following is a list of all the talks I went to see. Unfortunately there are always some talks happening in parallel, but you can still watch the stream to catch up. At the end you'll find a list of talks I liked and would recommend even if it's not a topic you are very knowledgeable about.
If you don’t have time to watch all of them I’d recommend to watch:
Application Token
To get the Application token just create a new application on the Pushover website: https://pushover.net/apps/build.
User Token
The “User Key” is displayed on your user profile.
Cronjob
Add this cronjob and make sure the user has permissions to run the emerge command:
0 20 * * * /bin/zsh /root/tools/updatecheck.sh >/dev/null 2>&1
Boom!
That’s all!
]]>The goal of this short guide is to have automated, encrypted and incremental backups from one server to another remote server. To achieve this we are going to use duplicity and it’s simplified wrapper duply. There are more detailed guides out there but these are the steps I used and they work for me, the sources used for this guide are linked at the bottom.
This is the machine where we are going to store all the backups from the various remote servers. I added a new user called backup
for this purpose. If you don’t trust the other servers use a different user for each server.
useradd -m -G users,wheel,audio -s /bin/zsh backup
passwd backup
All the backups will be stored in its home directory, where we'll now create the following directories. Remember the path; we'll need it at a later stage. Make sure you replace <server name>
with something like the hostname of the server you want to backup in that directory so it’ll be easier to figure out which backup is stored where if you are using multiple servers.
A good example would be: /home/backup/incoming-backups/example.com
/home/backup/incoming-backups/<server name>
This is one of many remote servers we want to back up to the master server.
The first step is to install the dependencies. There’s no stable version available for Gentoo at the moment so you’ll have to unmask the latest version by adding app-backup/duply ~amd64
to your package.keywords
file. Once this is done just install it.
emerge -av app-backup/duply app-backup/duplicity
If you are using Ubuntu/Debian a simple apt-get install duply
should also grab the other dependencies.
Type duply <server name> create
to initialize a new backup. You should probably run this as the root user if you want to back up directories only accessible by root
. The <server name>
is a placeholder for whatever you want to call your backup set. I usually just use the hostname with no spaces, dots or any special characters.
This will create a new directory in your home directory containing two files: conf
and exclude
.
/root/.duply/<server name>/
Because we want to encrypt all our backups so we can store them on an untrusted host we need to create a new GPG key. To do that just run gpg --gen-key
and accept the default options except the keylength which I usually set to 4096
.
You'll probably see a message like this telling you to generate new entropy:
Not enough random bytes available. Please do some other work to give the OS a chance to collect more entropy!
In that case you could run some commands on the server, install updates or just emerge sys-apps/rng-tools
and generate some new entropy with rngd -r /dev/urandom. This should usually do the trick.
Once the key is generated it'll ask you for some information like "Full Name", "Email" and "Comment". In my case I use the name of the backup set for the full name, my regular email and the FQDN of the server as the comment. This will make it easier to find the private key for the server later on.
Don’t forget to write down / store the passphrase somewhere safe. We are going to need it for the following step.
Now it’s time to edit the conf
file to fit our needs. There are a lot of comments in the file explaining the various options, so I'm just going to go over my settings without explaining each one.
Open the conf
file and add / uncomment the following values:
If you don’t remember your KEY ID just run: gpg --list-secret-keys
to get a list of all your secret keys in your keychain. The output will look something like this:
sec 4096R/XXXXXXXX 2014-11-30
uid dewey <mail@example.com>
ssb 4096R/YYYYYYYY 2014-11-30
The X’ed value is the GPG_KEY you are looking for, the passphrase is the one you wrote down earlier.
GPG_KEY='XXXXXXXX'
GPG_PW='YOURPASSPHRASE'
GPG_OPTS='--compress-algo=bzip2 --personal-cipher-preferences AES256,AES192'
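If you'd rather script the lookup than eyeball the output, here is a small sketch of extracting the key ID with awk. The sample output is inlined and the key ID "AB12CD34" is made up for demonstration; on a real host you would pipe `gpg --list-secret-keys` into the awk command instead.

```shell
# Extract the short key ID from (sample) `gpg --list-secret-keys` output.
# The "sec" line has the form: sec   4096R/<KEYID> <date>
sample='sec   4096R/AB12CD34 2014-11-30
uid                  dewey <mail@example.com>
ssb   4096R/EF56GH78 2014-11-30'
# Split on "/" and spaces; the key ID is the third field of the "sec" line.
key_id=$(printf '%s\n' "$sample" | awk -F'[/ ]+' '/^sec/ {print $3}')
echo "$key_id"
```

The extracted value is what goes into GPG_KEY in the duply conf file.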
There are a lot of options to choose from, pick the one your master server supports.
Backup to Master Server
TARGET='rsync://backup@example.com/incoming-backup/<server name>'
Backup to Google Drive (with PyDrive)
This is the new and working method, please follow this guide: https://blog.notmyhostna.me/duplicity-with-pydrive-backend-for-google-drive/
Backup to Google Drive (with gdata) [Deprecated due to Google API changes]
If you want to store your backups in your Google Drive just install the API library via dev-python/gdata
and add the Google Drive target. The user
is the part in front of the @ of your Google (Apps) email address, the password is the one you are using for that account. The path you specify after the domain part will be created automatically.
If you are using Gentoo make sure to switch your Python interpreter to python2.7
(by using eselect python
), 3.x is not supported by the gdata library yet.
TARGET='gdocs://user:password@example.com/backup-incoming/notmyhostna.me'
SOURCE='/'
If you want to manually ignore directories you can just create a .duplicity-ignore
file in that directory and it won’t be included in the backup. This is a good option if you want to back up the entire /home/
directory but not the directory of temporary files in your own home directory.
FILENAME='.duplicity-ignore'
DUPL_PARAMS="$DUPL_PARAMS --exclude-if-present '$FILENAME'"
With these parameters you’ll be able to define how many full or incremental backups will be kept. You should read the documentation / comments and make sure it fits your environment.
MAX_AGE=1M
MAX_FULL_BACKUPS=2
MAX_FULLS_WITH_INCRS=1
MAX_FULLBKP_AGE=2M
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE "
VOLSIZE=50
DUPL_PARAMS="$DUPL_PARAMS --volsize $VOLSIZE "
VERBOSITY=5
The other important file is called exclude
and it defines the directories included or excluded from your backup. It’s very simple:
- /etc/.git/
+ /etc/
+ /home/user2/imporant.txt
+ /home/dewey/
+ /var/www/
+ /root/.duply/
- **
Every line starting with +
will be included in the backup, everything with -
will be skipped. - **
will exclude everything not matched by the parent rules. The order matters so make sure you don’t exclude /home/someuser/
and later on add /home/someuser/coup.txt
- it won’t be included that way.
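To make the ordering rule concrete, here is a toy first-match evaluator. This is not duplicity's actual matcher, just a sketch of the semantics: the first rule whose pattern matches a path decides whether it is included.

```shell
# Toy first-match evaluator illustrating why order matters in the exclude
# file: the first matching rule wins. Simplified sketch, not duplicity code.
cd "$(mktemp -d)"
decide() {  # usage: decide <path> <rules-file>; prints "+" or "-"
    while read -r sign pattern; do
        case "$1" in
            $pattern|$pattern/*) echo "$sign"; return;;
        esac
    done < "$2"
    echo "-"   # fall-through, mirrors a final "- **" rule
}
cat > rules <<'EOF'
- /home/someuser
+ /home/someuser/coup.txt
EOF
result=$(decide /home/someuser/coup.txt rules)
echo "$result"   # the earlier exclude wins, so this prints "-"
```

Swapping the two rules in the file would flip the result, which is exactly the pitfall described above.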
We want to be able to log into the master server without entering our password so the backup task will be able to run automatically in the background. We achieve that by copying our public key (id_rsa.pub
to the remote server’s authorized_keys
file. Luckily there’s an easy way to do just that:
ssh-copy-id -i ~/.ssh/id_rsa.pub backup@example.com
Enter your password one last time and we are set. Try to login via ssh to see if it works and if you don’t have to enter the password again we succeeded.
If you want to see the available duply commands just use duply usage
. In our case we are going to use:
duply <server name> backup
for the first full backup. From now on just use
duply <server name> incr
to trigger incremental backups. If you want to see the list of backups stored on the remote host use duply <server name> status
and you’ll see something like this:
Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Sun Nov 30 21:12:22 2014
Chain end time: Tue Dec 2 00:00:04 2014
Number of contained backup sets: 5
Total number of contained volumes: 10
Type of backup set: Time: Num volumes:
Full Sun Nov 30 21:12:22 2014 1
Incremental Sun Nov 30 21:16:34 2014 1
Incremental Sun Nov 30 21:53:00 2014 6
Incremental Mon Dec 1 00:00:03 2014 1
Incremental Tue Dec 2 00:00:04 2014 1
Pretty!
Note: The first backup also exported the private and public gpg keys to the ~/.duply/<server name>
directory. Please don’t skip the section called “Restore” at the end of the guide. We are going to deal with these files there.
Nobody likes to do things manually so we are going to let cron do all the heavy lifting for us. Use crontab -e
to view your available cronjobs and add:
0 0 * * 7 /usr/bin/duply /root/.duply/<server name> full_verify_purge --force
0 0 * * 1-6 /usr/bin/duply /root/.duply/<server name> incr
If you want to backup mySQL databases too you’ll have to grab a database dump, move it to some location included in your exclude
file and clean up that location after the backup. Duply got us covered there.
Just create a file called post
and pre
in your .duply/<server name>/
directory.
pre
/usr/bin/mysqldump --all-databases -u root -pXXXXXXXX | gzip -9 > /var/backups/sql/sqldump_$(date +"%d-%m-%Y").sql.gz
post
/bin/rm /var/backups/sql/sqldump_$(date +"%d-%m-%Y").sql.gz
If your dump takes a long time make sure the timestamps are still covered by your post
command and the archives don’t build up in that directory.
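One way to sidestep that timestamp mismatch is to have the pre script record the exact filename it produced and have the post script read it back instead of recomputing the date. The snippet below is an illustrative sketch, not the scripts from the post; the dump is simulated with a plain echo.

```shell
# Sketch: pre writes the dump filename to a state file, post reads it back,
# so a dump that runs past midnight still gets cleaned up correctly.
state=$(mktemp)
# pre would do something like:
dumpfile="/tmp/sqldump_$(date +%d-%m-%Y).sql.gz"
echo "backup data" | gzip -9 > "$dumpfile"   # stand-in for mysqldump
printf '%s\n' "$dumpfile" > "$state"
# post would then do:
rm -f "$(cat "$state")" && echo "removed dump"
```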
Once you created these don’t forget to add that directory to your exclude
file like that:
+ /var/backups/sql/
Important: After the first backup make sure to copy the whole .duply directory to some place safe. It’ll now include your private key.
Your ~/.duply/<server name>
directory should now contain your config files and the public and private encryption key. It’s very important to store this directory somewhere safe. If you don’t have access to this directory it won’t be possible to restore and decrypt the backup.
.
├── conf
├── exclude
├── gpgkey.XXXXXXXX.pub.asc
├── gpgkey.XXXXXXXX.sec.asc
├── post
└── pre
If you don't have a way to transfer (e.g. scp) this to a safe place, just encrypt it, move it to a public directory and download it.
Generate a password:
openssl rand -base64 32
Create an archive:
tar cvzf duply-<server name>.tar.gz .duply
Encrypt it:
openssl enc -aes-256-cbc -salt -in duply-<server name>.tar.gz -out duply-<server name>.tar.gz.enc -k <your password>
Decrypt it on the target machine:
openssl enc -d -aes-256-cbc -in duply-<server name>.tar.gz.enc -out duply-<server name>.tar.gz -k <your password>
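The whole procedure can be rehearsed as a round trip with a throwaway file and a generated password before trusting it with the real duply archive. All file names below are placeholders; the openssl flags mirror the commands above.

```shell
# Round-trip sketch: create a stand-in archive, encrypt it, decrypt it,
# and verify that the result matches the original byte for byte.
workdir=$(mktemp -d)
cd "$workdir"
printf 'duply config contents\n' > duply-test.tar.gz    # stand-in archive
password=$(openssl rand -base64 32)
openssl enc -aes-256-cbc -salt -in duply-test.tar.gz \
    -out duply-test.tar.gz.enc -k "$password"
openssl enc -d -aes-256-cbc -in duply-test.tar.gz.enc \
    -out duply-test.dec -k "$password"
cmp -s duply-test.tar.gz duply-test.dec && echo "round trip OK"
```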
If you want to store the private key in the keychain on your main machine just import gpgkey.XXXXXXXX.sec.asc
to GPG Keychain (Mac only) or do it via the command line:
gpg --allow-secret-key-import --import gpgkey.XXXXXXXX.sec.asc
It’ll now show up if you run gpg --list-secret-keys
on that machine.
Now if you want to restore a server just install duply and duplicity on the new machine, restore the .duply
directory, set up the public key authentication and use the following commands:
Restore a single file:
duply <profile> fetch <src_path> <target_path> [<age>]
Restore a directory:
duply <profile> restore <target_path> [<age>]
The default value for <age>
is $now
, but you may also enter values like 1M
or 10H
, depending on how frequently your backups run.
If you want to take this to the next level and also backup your backup master server you could use something like Tarsnap which I already wrote about some time ago: Backup your server with Tarsnap
That’s it!
.torrent
file for. Nested directories (*cough* collages) are no problem.
Install dependencies listed in the script, create a directory called torrent
or just customize the path in the script and then run node app.js
.
RatioHit|⇒ node app.js
Total size:
694263227494 bytes
662101 MB
647 GB
Number of Torrents: 1728
There are a lot of scripts and applications out there accomplishing the same task. Most of them are probably more polished than what I came up with, but there was always something missing, which made me switch back to an old unmaintained PrefPane called “ScreenGrab”.
Unfortunately it was FTP only and I wasn’t really keen on running an ftpd on my server any more.
My requirements were (and still are):
Like the comment in that thread suggests most of this would be possible with Automator but where’s the fun in that.
I don’t know what took me that long but I wrote a simple tool powered by Node.JS doing just that.
Grab it here: ScreenUpload
In the future I’m planning on adding a simple web interface running on the remote server which should make it easier to delete screenshots, search for screenshots from a specific time range or just browse the archive. Another neat feature would be drag-to-upload on the web interface.
]]>If you want to run it on a system which isn’t Mac OS you’ll probably have to tweak a few things. Especially the lines containing terminal-notifier
because they make use of the Mac OS notification center. Just replace them with echo
statements or a similar notification framework for your operating system.
To get the notifications into the notification center you'll have to install terminal-notifier. You can grab it from Github or install it via Homebrew.
Download the script and make it executable.
git clone https://gist.github.com/d2f364a60e4384b8d44e.git backup-znc-logs
chmod +x backup-znc-logs/backup-znc-logs.sh
Open the script and edit the line which defines the SCRIPT_HOME
. That's the directory where you'll create subdirectories for the various servers. In our case we'll create directories called “notmyhostna.me” and “example.com” because these are the two servers we want to back up.
cd ~/Documents/Textual\ Logs/ZNC/
mkdir notmyhostna.me
mkdir example.com
Now it’s time to create a config file to tell the script where it should grab the files from and where to store them. We do this by creating an invisible file called .config-backup
in the server directory.
vim ~/Documents/Textual\ Logs/ZNC/notmyhostna.me/.config-backup
Add these lines and customize them to your needs (you don't have to copy the comments). Make sure not to escape the spaces in the path names because rsync doesn't like that.
# The name of the server which is used in notifications.
SERVERNAME="notmyhostna.me"
# The name of the file where we store the timestamp of the last backup. Just in case you feel like changing it. You probably won't have to change this.
TIMESTAMP_FILE=".last-backup"
# The path and username of the remote server where the logs are stored
PATH_REMOTE="dewey@notmyhostna.me:/home/dewey/.znc/users/dewey/moddata/log/"
# The path the logs should be synced to on the machine the script is running from. `${HOME}` is used instead of `~`.
PATH_LOCAL="${HOME}/Documents/Textual Logs/ZNC/notmyhostna.me/"
You have to create a file like this for every server you want to backup. For this to work it’s required that you are using ssh keys and not passwords.
Note: Per default the script will only create a backup every 24h. If you want to change that search for the value 86400
and change that to something you prefer.
If you run the script and the last backup isn’t older than 24h the output you’ll see looks like this:
Scripts|⇒ ./backup-znc-logs.sh
example.com : Last backup < 24h old. Do nothing.
notmyhostna.me : Last backup < 24h old. Do nothing.
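The 24h check itself can be sketched like this. Variable names are guesses based on the config file above: the script stores a Unix timestamp in `.last-backup` and only syncs when it is older than 86400 seconds. The rsync line is commented out here since it needs a real remote.

```shell
# Minimal sketch of the per-server age check. On the first run no timestamp
# file exists yet, so the backup branch runs and records the current time.
workdir=$(mktemp -d)
cd "$workdir"
timestamp_file=".last-backup"
interval=86400
now=$(date +%s)
last=$(cat "$timestamp_file" 2>/dev/null || echo 0)
if [ $((now - last)) -lt "$interval" ]; then
    echo "Last backup < 24h old. Do nothing."
else
    # rsync -az "$PATH_REMOTE" "$PATH_LOCAL"   # the actual sync step
    echo "$now" > "$timestamp_file"
    echo "Backup done."
fi
```

Running it a second time within 24 hours takes the "Do nothing" branch, matching the output shown above.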
If you want to automatically run the script you could use something like this: http://stackoverflow.com/questions/9522324/running-python-in-background-on-os-x/9523030#9523030
]]>Install munin from package sources
apt-get install munin-node
Add master node to allowed IPs
vim /etc/munin/munin-node.conf
Unfortunately you can only add IP addresses, not hostnames. The format you'll have to follow is:
allow ^xxx\.xxx\.xxx\.xxx$
Clone the official plugin repository
We are going to clone the repository with the latest third-party plugins to /etc/munin/plugins-git
. Once you’ve done this just symlink the plugins you want to use into Munin’s plugin directory. This also makes it very easy to update.
git clone https://github.com/munin-monitoring/contrib.git /etc/munin/plugins-git
In this example we are going to symlink a plugin which accepts an additional parameter, in our case the network interface on this OpenVZ server.
ln -s /etc/munin/plugins-git/plugins/network/vnstat_ /etc/munin/plugins/vnstat_venet0
]]>
*.notmyhostna.me
domains and here’s how I did it.
First you need to buy a wildcard certificate, I bought one from cheapsslsecurity.com.
If you want to secure subdomains you'll need to spend a little bit more and go for one of their wildcard certificates listed under the “Secure Sub-Domains” section. I went for the cheapest one, costing me ~$130 for two years, which is reasonable.
During the checkout process they ask you for your “CSR” - and that’s what we are going to give them. Because we are now going to generate a bunch of files and want to store them all in our nginx directory we are going to create a new directory in the nginx directory called ssl
.
Navigate to your nginx directory usually located at /etc/nginx/
and create a new subfolder called ssl
.
In public key infrastructure (PKI) systems, a certificate signing request (also CSR or certification request) is a message sent from an applicant to a certificate authority in order to apply for a digital identity certificate.
To generate the CSR and your private key run the following command while being located at /etc/nginx/ssl
.
openssl req -nodes -newkey rsa:2048 -keyout notmyhostna.me.key -out notmyhostna.me.csr
During this process the key tool will ask you for some information:
Country Name (2 letter code) [AU]: AT
...
Common Name (e.g. server FQDN or YOUR name) []: *.notmyhostna.me
The important part here is to set the Common Name to *.notmyhostna.me
. This will allow us to use this certificate for multiple subdomains in the future.
The result of this step will be two files called notmyhostna.me.csr
and notmyhostna.me.key
in your current directory. Now open the csr file and copy/paste the content into cheapsslsecurity’s CSR form.
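If you want to skip the interactive prompts, the same CSR can be generated non-interactively by supplying the fields with -subj. The country and Common Name below are examples only; substitute your own wildcard domain.

```shell
# Non-interactive CSR generation: -subj answers the prompts up front.
cd "$(mktemp -d)"
openssl req -nodes -newkey rsa:2048 \
    -keyout example.key -out example.csr \
    -subj "/C=AT/CN=*.example.com" 2>/dev/null
# Confirm the wildcard Common Name made it into the request:
openssl req -in example.csr -noout -subject
```

The second command prints the subject line of the request, which should show the wildcard CN before you paste the CSR into the order form.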
I wish I were kidding when I tell you that the mail looks like a cheap phishing mail, but it does. Broken images and everything.
The Comodo site where it’s redirecting you isn’t any better so get used to it:
The next mail you’ll get is one with an attached zip file containing the root certificates and your purchased wildcard certificate:
Transfer this zip file to your server, unzip it and move the files to your ssl
directory.
Build the certificate chain for nginx with the following command:
cat STAR_notmyhostna_me.crt PositiveSSLCA2.crt AddTrustExternalCARoot.crt >> notmyhostna.me.crt
(The order matters, so don’t get creative here)
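Before wiring the bundle into nginx, you can verify that the certificate actually matches your private key; both commands should print the same modulus hash (a quick check, not part of the original guide):

```shell
# Certificate and key must print identical hashes,
# otherwise nginx will refuse the pair.
openssl x509 -noout -modulus -in notmyhostna.me.crt | openssl md5
openssl rsa  -noout -modulus -in notmyhostna.me.key | openssl md5
```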
Now we are going to setup nginx. Create another child directory called /etc/nginx/ssl/notmyhostna.me
and move notmyhostna.me.crt
and notmyhostna.me.key
to this directory. These are the only files nginx will need.
I usually have one main config file for every domain, and multiple sub configurations for the subdomains. I think it makes dealing with multiple configurations for the different subdomains easier than having one massive file containing everything.
The “main” configuration file
server {
listen 443 default_server ssl;
# _; is used for the default vHost
server_name _;
ssl_certificate /etc/nginx/ssl/notmyhostna.me/notmyhostna.me.crt;
ssl_certificate_key /etc/nginx/ssl/notmyhostna.me/notmyhostna.me.key;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Perfect Forward Security
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS +RC4 RC4";
}
If you want to read a more in-depth description of these options follow this link: http://axiacore.com/blog/enable-perfect-forward-secrecy-nginx/
That’s our main server block listening on the SSL port 443. It’ll use the SSL settings we defined in the server block above (The “default” server). Add this to the same file.
server {
listen 443;
server_name notmyhostna.me;
error_log /var/log/nginx/notmyhostna.me.error.ssl.log;
access_log /var/log/nginx/notmyhostna.me.access.ssl.log;
root /var/www/notmyhostna.me/;
index index.html;
}
If you want to redirect all requests on port 80 to the SSL version add another server block with a redirect to the same file:
server {
listen 80;
server_name notmyhostna.me;
rewrite ^ https://$server_name$request_uri? permanent;
}
A configuration file for a subdomain
This configuration file will still use our default server we setup in the main file that’s why we just need our regular server blocks here. Ignore the proxy_
settings, they are just used for this blog because it’s using nginx as a reverse proxy for the NodeJS backend.
server {
listen 443;
server_name blog.notmyhostna.me;
error_log /var/log/nginx/blog.notmyhostna.me.error.ssl.log;
access_log /var/log/nginx/blog.notmyhostna.me.access.ssl.log;
root /var/www/blog.notmyhostna.me/;
index index.html;
location / {
proxy_pass http://localhost:2368/;
proxy_set_header Host $host;
proxy_buffering off;
autoindex off;
}
}
server {
listen 80;
server_name blog.notmyhostna.me;
rewrite ^ https://$server_name$request_uri? permanent;
}
Now, after enabling the sites (symlinking from sites-available
to sites-enabled
), run nginx -t
to check the config for errors; if there are none, restart nginx and your sites should be available over HTTPS.
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
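Enabling a site is just that symlink; for a configuration file named notmyhostna.me it could look like this (paths assume a Debian-style nginx layout):

```shell
# Link the site config into sites-enabled so nginx picks it up
ln -s /etc/nginx/sites-available/notmyhostna.me /etc/nginx/sites-enabled/notmyhostna.me
```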
An optional last step is to check your SSL setup via the SSL Labs Server Test, just for the sake of security.
This setup should result in an A
rating.
Further links if you want to know why things are done this way:
]]>The author of the book is Malkit Shoshan and the design was done by Joost Grootens who also designed a lot of other atlases like the Metropolitan World Atlas, compiling interesting facts about different metropolises through an easy to grasp system of orange dots.
]]>The instructions on how to install this tool are on the Github page of the project. If you are on a Mac just use homebrew as usual.
brew tap laurent22/massren
brew install massren
This will tap a specialized repository because it’s not in the main Homebrew formulas (yet?). One of the main dependencies is go
so this will probably take a minute to install.
Now just tell massren to use Vim as the default editor with:
massren --config editor "vim"
If you are already familiar with string manipulation in Vim you already know how to rename files with massren.
Usage:
The working directory:
demo|⇒ ls
blah0.txt~ blah2.txt~ blah3.txt~ blah4.txt~
Navigate to the directory where your files are located and type massren
. It’ll open the regular Vim editor and you will see a list of all the files in the directory.
Our goal is to remove all these nasty ~
from the filename of these automatically created backup files.
Now we are just going to use Vim expressions we are all familiar with already. If you are not, there’s a wiki explaining the various expressions.
Just type the following expression (Don’t forget to escape the tilde) and confirm by pressing Return
.
:%s/txt\~/txt/g
If you see something like “4 substitutions on 4 lines” it worked. Now we just have to save our “file” like we always do. Use :wq
to save and quit Vim.
The output should look like this:
demo|⇒ ls
blah0.txt~ blah2.txt~ blah3.txt~ blah4.txt~
demo|⇒ massren
massren: Waiting for file list to be saved... (Press Ctrl + C to abort)
demo|⇒ ls
blah0.txt blah2.txt blah3.txt blah4.txt
demo|⇒
No ~
. Perfect!
The End.
]]>I wasn’t able to find anything free and decent looking so I went back and gave TestDisk another look. Browsing their wiki I stumbled upon PhotoRec which is a tool dedicated for the exact task I was going to need it for. And the best part: It already ships with TestDisk per default.
If you are using homebrew (and you should!) just install it with a quick and painless
brew install testdisk
And we are ready to use it.
If you are not a homebrew fan just download the version for your operating system from the downloads page and unpack it with double click or
tar -xvf testdisk-x.xx.mac_intel.tar.bz2
Navigate to the directory and run it with sudo ./photorec
.
If you have installed it with homebrew get yourself a new and shiny terminal window and run sudo photorec
. It’ll prompt for your password. After that you’ll see an interface like the following:
Now just select the disk you want to recover files from, navigate with the arrow keys and proceed with Enter
.
Select the partition you want to recover files from and proceed with Enter
.
In the next step you’ll have to select the filesystem type. If it’s a memory card for a camera it’s usually Other
. Proceed with Enter
.
The next step asks which strategy the tool should use: searching just the free space, or everything. Select Whole
and continue with Enter
.
Once we have done that we’ll have to choose the location PhotoRec will restore the files to. Navigate through your file system and press C
and the restore process will start. It’ll look like this and will take quite some time, depending on how much storage it needs to go through.
The displayed ETA is usually pretty accurate. Once this is done it’ll display a summary on how many files it was able to recover.
You’ll find all your restored files in a directory called recup_dir.x
in the destination directory you specified in the previous step.
The End.
]]>Have you ever heard of Austria? If you have, what are the first three words that come to your mind?
Because I have text files (IRC logs) of all these conversations, they almost beg to be analysed, so we’ll use some basic shell scripting to extract the relevant lines from the logs. It’s not trivial to automate this because the answers are given in no uniform way. Instead we are just going to grab the whole section and manually clean out the irrelevant answers to the other questions.
Once we have a list of words which looks like this:
schnitzel
AKG
red, white
Vienna
German, Europe
We’ll just going to use tr
to do some basic text analysis:
cat answers.txt | tr -d '[:punct:]' | tr ' ' '\n' | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn
We remove the punctuation, replace whitespace with line breaks, convert everything to lowercase, sort alphabetically, collapse duplicates while counting them, and in the last step reverse the sort order numerically so the most frequent strings come first.
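The same pipeline can be written one stage per line, with a comment on each step (a sketch of the exact command above):

```shell
cat answers.txt |
  tr -d '[:punct:]' |  # strip punctuation ("red, white" -> "red white")
  tr ' ' '\n' |        # one word per line
  tr 'A-Z' 'a-z' |     # lowercase ("Vienna" -> "vienna")
  sort |               # group identical words so uniq can count them
  uniq -c |            # collapse duplicates, prefix each with its count
  sort -rn             # most frequent words first
```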
Sometimes you don’t want to split the lines if there’s a space between the words; use the following command in that case:
cat answers.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn
The end result should look something like this:
10 vienna
6 europe
5 hitler
3 kangaroo
3 german
3 beer
3 australia
3 alps
2 wien
2 terminator
2 sydney
2 schnitzel
2 mozart
2 mountains
2 country
2 arnold
2 apfelstrudel
If that does pique your interest check out the following link for a more in-depth explanation:
http://williamjturkel.net/2013/06/15/basic-text-analysis-with-command-line-tools-in-linux/
The End.
]]>cmd + R
)
Coincidentally, that’s also a shortcut used by Textual to rearrange the channels alphabetically, which is usually not what you want.
The likelihood of hitting cmd + R
while Textual is still the active window (and not the browser/inspector) is very high, and that’s why we are going to reassign the shortcut to something less disastrous. (Yes, the correct order of IRC channels is serious business.)
Restoring your old channel order:
s/o to the people with working backups!
Luckily Textual stores the way the channels are sorted in the application’s plist file so we are going to grab the old one from our TimeMachine backup and replace the new one.
Open your terminal and navigate to path where the correct plist file is stored. The path usually looks a bit like this:
cd /Volumes/MobileBackup/Backups.backupdb/monki/2014-03-07-125854/Macintosh\ HD/Users/dewey/Library/Containers/com.codeux.irc.textual/Data/Library/Preferences
There are two files directly related to Textual in there. Use ls | grep textual
to list them:
Preferences|⇒ ls | grep textual
com.codeux.irc.textual.LSSharedFileList.plist
com.codeux.irc.textual.plist
We only need to restore the second one.
Make sure you quit Textual at this point and then use cp
to copy/overwrite the “new” (wrong channel order) with the old plist file containing the correct order:
cp com.codeux.irc.textual.plist ~/Library/Containers/com.codeux.irc.textual/Data/Library/Preferences/com.codeux.irc.textual.plist
Now we are done here, but because Mavericks caches plist files (and there is no well-known way to flush that cache manually), you’ll have to reboot your computer for the change to take effect.
Prevent this from happening in the future:
It’s great that we are able to restore these old settings but how can we prevent this from happening in the future? Easy, we’ll just reassign the Textual shortcut like mentioned earlier.
Open “System Preferences / Keyboard / Shortcuts” and reassign cmd + R
to Main Window
Note: If you really need the rearrange shortcut it’s possible to do it the other way round and just assign some obscure shortcut so you don’t hit it accidentally. To do that just use “Sort Channel List” instead of “Main Window” and a shortcut of your choice.
There’s also a neat way to list all your custom shortcuts we could use to verify that our new shortcut is in place. Just run defaults find NSUserKeyEquivalents
and you’ll get something like this:
~|⇒ defaults find NSUserKeyEquivalents
Found 1 keys in domain 'com.codeux.irc.textual': {
NSUserKeyEquivalents = {
"Main Window" = "@r";
"Next Unread Channel" = "@7";
};
}
As you can see everything worked as expected. Happy reloading!
The End.
]]>The problem:
For some unknown reason pywhatauto is replacing my working cookie file for BTN with an incomplete one. Probably due to a failed authentication with the site. (I blame Cloudflare and the ongoing DDoS). So we have a problem which occurs repeatedly. Repeatedly sounds like a perfect use case for a cronjob
to me.
The fix:
*/5 * * * * /home/dewey/fixcookies.sh > /dev/null 2>&1
Don’t forget to make the script executable with chmod +x fixcookies.sh
.
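The script itself isn’t shown above; a minimal sketch of what such a fixcookies.sh could look like, assuming a known-good copy of the cookie file is kept next to it (all paths and filenames here are hypothetical):

```shell
#!/bin/sh
# Hypothetical fixcookies.sh; the original script isn't shown in the post.
COOKIES=${COOKIES:-/home/dewey/pywhatauto/cookies.txt}
GOOD=${GOOD:-/home/dewey/pywhatauto/cookies.good}

# A failed authentication leaves a truncated cookie file behind;
# if the current file is smaller than the backup, restore the backup.
if [ -f "$GOOD" ] && [ "$(wc -c < "$COOKIES")" -lt "$(wc -c < "$GOOD")" ]; then
    cp "$GOOD" "$COOKIES"
fi
```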
The End.
]]>Unfortunately the version of sort
shipped with Mac OS doesn’t include the -R
/ --random-sort
option and shuf
doesn’t come by default. We’ll have to install GNU’s coreutils to use them.
If you are using homebrew it’s as simple as running brew install coreutils
. Once installed it’s available via gsort
. All the GNU coreutils are prefixed with g
so they are not conflicting with the system provided tools.
ls Movies/ | gshuf -n 1
It’s also possible to be used with find
if you need more flexibility (Only want it to choose between all 720p *.mkv files? No problem!
find Movies/*.mkv -name "*720p*" -type f | gshuf -n 1
And because I’m lazy I wrote a little shell script* with suggests a random movie based on some input parameters.
And this is how it works:
Movies|⇒ popcorntime -p ~/Movies -e mkv -q 720p
You should watch:
Where.the.Trail.Ends.2012.720p.Bluray.x264-ESiR.mkv
If you want to call it with popcorntime
instead of ./popcorntime.sh
you’ll have to add an alias to your ~/.zshrc
like that:
alias popcorntime="sh ~/path/to/popcorntime.sh"
Make sure the script is executable with:
chmod +x ~/path/to/popcorntime.sh
The End.
*As usual writing a quick zsh function turned into a shell script and that task turned into a 2h BashFAQ binge.
Relevant XKCD: https://xkcd.com/1319/
]]>If you want to run OpenVPN within an OpenVZ container you’ll have to set up the iptables
rules for the correct network interface (You don’t say!). Most likely it’s called venet0
. You can double check this with ifconfig
as root or just
ifconfig -a | sed 's/[ \t].*//;/^\(lo\|\)$/d'
to get a list of network interfaces. The proper iptables rules should look like this:
iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
iptables -A FORWARD -j REJECT
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE
iptables -A INPUT -i tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -j ACCEPT
iptables -A INPUT -i tap+ -j ACCEPT
iptables -A FORWARD -i tap+ -j ACCEPT
Don’t forget to add them to your /etc/rc.local
file to make them persistent across reboots.
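An alternative to pasting the raw rules into /etc/rc.local is iptables-save/iptables-restore; a sketch, assuming a Debian-style setup where /etc/rc.local runs at boot:

```
# once, after the rules are in place:
iptables-save > /etc/iptables.rules

# in /etc/rc.local, before the final `exit 0`:
iptables-restore < /etc/iptables.rules
```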
The End.
]]>One of Calibre’s quirks is that it’s updated frequently but doesn’t come with an auto-updater. Your only way to update is to re-download the ~82 MB binary and replace your current version with the new one.
I usually try to use cask for these kinds of apps to make the process of updating them less annoying. In this case cask isn’t working as expected because Calibre switched to just using latest
instead of explicit version numbers. This makes it (currently) impossible for cask to figure out the right version to install.
What we are going to do is simply remove the old calibre-latest
files from the cask cache, remove the old Calibre.app
from our system and reinstall the new one.
This isn’t very elegant but it’s a workaround until they figured out how to handle updates for applications installed via cask consistently.
Add this to your ~/.zshrc
:
function update-cask {
BREW_CACHE=$(brew --cache)
rm -r $BREW_CACHE/calibre-latest
brew cask uninstall calibre
brew cask install calibre
echo "Calibre updated to latest version! Updating other apps now..."
for c in `brew cask list`; do ! brew cask info $c | grep -qF "Not installed" || brew cask install $c; done
}
Now run source ~/.zshrc
to reload your shell config and it’ll be available via update-cask
in your Terminal.
There are multiple ways to keep and maintain backups on a Linux box (rsnapshot, rsync, …). I was searching for a reasonably cheap service to create and store encrypted offsite backups. I stumbled upon Tarsnap a number of times now and finally decided to give it a shot.
I just want to keep a couple of revisions of a small number of directories in case of a hardware failure or user error. Tarsnap’s default script doesn’t cover the rolling-backup feature I need, which is why I’m using Tarsnap-generations: a simple script that creates new archives from a list of directories and prunes old archives after a given timespan. Think of it as rsnapshot for Tarsnap.
Note: This is not a tutorial, this is just to document my setup for future reference.
For it to work you’ll need to set one parameter which isn’t set in the default Tarsnap config so just create a .tarsnaprc
in your ~
and add
humanize-numbers
You could also exclude some directories and files from your backup path. To do this just use the exclude
parameter like this:
exclude /.ssh/
exclude /var/logs
The directories you want to backup should be added to a file called tarsnap.folders
in your home directory. (If you are using the same cronjob from Tarsnap-generation’s example on Github).
/home/dewey/.znc
/home/dewey/.zsh*
/var/www
/etc
/usr/share
/var/lib/bitlbee
If you manually run the script without waiting for a cronjob to run it, it’ll look like this:
tarsnap: Removing leading '/' from member names
                                       Total size  Compressed size
All archives                               2.1 GB           638 MB
  (unique data)                            1.2 GB           371 MB
This archive                               6.2 kB           2.8 kB
New data                                   6.2 kB           2.8 kB
20140202-18-HOURLY-at-notmyhostname-/var/lib/bitlbee backup done.
Verifying backups, please wait.
20140202-18-HOURLY-at-notmyhostname-/var/lib/bitlbee backup OK.
Finding backups to be deleted.
Once the script ran you’ll see all your archived backups with the tarsnap --list-archives
command.
root@at-notmyhostname ~$ tarsnap --list-archives
20140202-18-HOURLY-at-notmyhostname-/home/dewey/.znc
20140202-18-HOURLY-at-notmyhostname-/home/dewey/.config
20140202-18-HOURLY-at-notmyhostname-/home/dewey/.zshrc
20140202-18-HOURLY-at-notmyhostname-/var/lib/bitlbee
Resources:
]]>After playing around for a while my current setup looks like this:
There are a lot of parameters to tailor it to your needs in the Newsbeuter documentation.
My current .newsbeuter/config
looks like this:
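The file itself isn’t reproduced above. As an illustration only (not the author’s actual config), a few common Newsbeuter options look like this:

```
auto-reload yes        # refresh feeds automatically
reload-time 30         # every 30 minutes
browser "open %u"      # open articles in the default Mac OS browser
bind-key j down        # vim-style navigation
bind-key k up
```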
Installation:
If you are on Mac OS 10.9.x and want to install it via homebrew you’ll have to edit the formula so it’s using the c++11
branch of the repository because there’s a namespace problem with the current branch.
To do that just run a brew edit newsbeuter
and replace the line containing head
with
head 'https://github.com/akrennmair/newsbeuter.git', :branch => 'c++11'
It should look like this at the time of this writing:
homepage 'http://newsbeuter.org/'
url 'http://newsbeuter.org/downloads/newsbeuter-2.7.tar.gz'
sha1 'e49e00b57b98dacc95ce73ddaba91748665e992c'
head 'https://github.com/akrennmair/newsbeuter.git', :branch => 'c++11'
depends_on 'pkg-config' => :build
...
After editing the formula just brew install newsbeuter
as usual and start editing your config.
Resources:
]]>cd ~/sites
mkdir example.com
cd example.com
git init
> Initialized empty Git repository in /Users/dewey/sites/example.com/.git/
This is the bare repository we are going to push to.
cd ~/git/
mkdir example.com.git && cd example.com.git
git init --bare
> Initialized empty Git repository in /home/user/git/example.com.git/
This hook will run after each push to the bare repository, checking out the changes to the working tree (which is not located inside the bare repository, because that’s how bare repositories work). The working tree will have no .git
directory (That’s how we want it) and will be served by nginx.
vim ~/git/example.com.git/hooks/post-receive
With this content:
#!/bin/sh
echo "********************"
echo "Post receive hook: Updating website"
echo "********************"
# The work tree is the directory nginx serves from; the git dir is the bare repo.
GIT_WORK_TREE=/usr/share/nginx/www/example.com
export GIT_WORK_TREE
git --work-tree=$GIT_WORK_TREE --git-dir=$HOME/git/example.com.git checkout master -f
Don’t forget to make the hook executable with chmod +x
, otherwise git will ignore it.
This is the directory where all the files should end up after each push. We will create it within nginx’s document root so it’ll be served up by the webserver.
cd /usr/share/nginx/www/
mkdir example.com
We have to add our regular user to the www-data
group which owns the files in nginx’s document root so we’ll be able to checkout the files.
sudo chown www-data /usr/share/nginx/www/example.com
sudo usermod -a -G www-data user
sudo chgrp -R www-data /usr/share/nginx/www/example.com
sudo chmod -R g+w /usr/share/nginx/www/example.com
Now we have to add the remote repository to our local repository so we can push to it.
Our local repository’s .git/config
should look like this:
[remote "production"]
url = ssh://user@example.com/~/git/example.com.git
fetch = +refs/heads/*:refs/remotes/production/*
gtSSHKey = /Users/user/.ssh/id_rsa
[branch "master"]
remote = production
merge = refs/heads/master
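Editing .git/config by hand works, but the same setup (minus the Tower-specific gtSSHKey line) can also be done with plain git commands:

```shell
# Add the remote and make master track it by default
git remote add production ssh://user@example.com/~/git/example.com.git
git config branch.master.remote production
git config branch.master.merge refs/heads/master
```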
Once this is added git remote
will show the remote repository:
git remote
> production
Pushing via
git push production
should now be possible.
Resources:
]]>