I enjoy cooking and baking. Like, a lot. I cook dinner for myself every night. When I have guests visiting me, I will cook them lunch or dinner and serve some fresh desserts after that.
I've done this since I moved out of my parents' place almost 10 years ago. But the most agonizing part was facing the same question day after day: What's for dinner tonight?
A family friend once mentioned that they do meal planning: each Monday, they sit down together and plan what they will eat for lunch and dinner the coming week.
When I first heard this in 2015, I thought that's just something that older folks do (the couple is in their seventies). Only years later did I notice that thinking about "what's for dinner?" occupied too much of my mind each day.
At the start of 2021 I wanted to change that.
As mentioned in the title, I use Things 3 to do my meal planning, but you can use any tool you like for this: a (digital) calendar, a piece of paper, a blackboard, you name it.
In Things, I created a recurring project called "Meal Plan Wxx" and a recurring task called "Create meal plan for the next week". Both project and task automatically show up in my Today view each Friday.
Each Friday evening I sit down and work on the project and task. I rename the project so the title represents the upcoming week (for example "Meal Plan W11"). And as you can see from the screenshot, the project comes prefilled with sections for each day of the week and 2 tasks each for lunch and dinner.
The notes field of the recurring project is also populated with my favourite dishes or recipes that are in my regular rotation.
So next comes the hardest part: deciding what to cook. As I also do meal prep¹, deciding what's for lunch is a bit easier, but this step always takes me 10 to 15 minutes.
When I think "I can't make that again next week", I open up one of my cookbooks or my trusty old binder full of collected recipes and pick a few dishes I would like to try next week.
I then update the "TBD Lunch" and "TBD Dinner" tasks with the name of the dishes. When I have planned lunch or dinner with friends in a restaurant or somewhere else, I just write down "Lunch with Jane" or "Dinner with Family".
A full meal plan would look something like this:
As I also keep my shopping list in Things, I open up another window and put both projects next to each other.
Then I go through my fridge, freezer, and pantry, compare them against the meal plan for next week, and put all the missing ingredients on my shopping list. If time allows, I take my bag, walk to my local supermarket around the corner, and buy all the ingredients for the next week. If a recipe calls for something fresh like spinach or salad, I pop by the supermarket on my way back from work on the respective day.
In 90% of cases, I finish my entire meal planning in about 90 minutes: from writing down what I want to cook and eat next week to doing the groceries and putting everything away in storage.
I could write my meal plan on a piece of paper, but I keep it in Things for a simple reason: my shopping list is already in that app.
When I'm at the grocery store, I can switch from my "Shopping List" project to the meal plan project and double-check that I didn't miss an ingredient. Besides, Things is always open on all my devices, so checking the current meal plan is always just a few keystrokes away.
When cooking, I often get my inspiration from blogs or YouTube channels. If you want to learn some basics, I can recommend the 101 series by NYT Cooking on rice, eggs, or chicken.
The YouTube channels of Alison Roman or Ethan Chlebowski are also great for getting inspiration or learning new techniques.
I don't spend the whole Sunday afternoon cooking 4-5 meals, but I always cook 1 or 2 extra portions each night, so I have leftovers for the next day. ↩
A neat but lesser-known feature of GitHub Actions is reusable workflows.
As the name might suggest, you can collect your most-used workflows in a public repository once and reuse them in your other projects, public or private. (Check out this section of a talk I gave in 2023 about this feature to learn more.)
A couple of weeks ago, I set up a reusable-workflows repository for my personal projects and made that repository public: stefanzweifel/reusable-workflows.
In this post, I would like to highlight some of my most cherished workflows and explain why I keep some types of workflows out of the reusable-workflows repository.
These are the low-hanging fruit of automation: making sure my code is formatted correctly.
I use Laravel Pint or php-cs-fixer in my projects, so I created a reusable workflow for each of them.
In my PHP projects, I now add a `.github/workflows/laravel-pint-fixer.yml` file with the following content:

```yaml
name: Laravel Pint

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  pint:
    uses: stefanzweifel/reusable-workflows/.github/workflows/laravel-pint-fixer.yml@main
```
This will trigger the workflow on every push to `main` or when a commit is pushed to a pull request.
If wrongly formatted code has been pushed, the workflow fixes it and pushes a commit back to the repository.
Under the hood, the workflow uses my git-auto-commit-action to detect and commit the changes. The resulting commit won't trigger a re-run of the test suite; in my opinion, a differently formatted piece of code should not affect the results of a test suite.
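To make this concrete, here's a minimal sketch of what such a reusable formatting workflow could look like. This is an illustration, not the actual file from my repository; the `workflow_call` trigger is what makes a workflow reusable, and the PHP version and step names here are assumptions:

```yaml
# Sketch of a reusable code-style workflow (illustrative, not the real file).
name: laravel-pint-fixer

on:
  # `workflow_call` allows other workflows to reference this one via `uses:`.
  workflow_call:

jobs:
  pint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'

      - name: Install dependencies
        run: composer install --no-interaction --prefer-dist

      - name: Run Laravel Pint
        run: ./vendor/bin/pint

      # Commit any formatting changes back to the branch.
      - uses: stefanzweifel/git-auto-commit-action@v5
        with:
          commit_message: Fix code style
```

The key design point is that the caller workflow only needs a single `uses:` line, while all the setup and formatting logic lives in one central place.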
Next, I would like to highlight the workflows I use in my public and private repositories to keep them up to date and automate chores.
My `auto-merge-dependabot-pr.yml` workflow does what the file name suggests: it automatically merges a Dependabot pull request when the test workflow succeeds. (I've written a blog post on how this workflow works before.)
The `release-drafter.yml` and `update-changelog.yml` workflows are primarily used in my public projects.
With `release-drafter.yml`, I build up my release notes based on pull request labels. (I've written in the past on how I use labels like `changelog:added` or `changelog:removed` to categorize pull requests into the right "changelog category".)
`update-changelog.yml` is triggered when I create a new release on GitHub. It takes the body of the release and places it in the right spot inside `CHANGELOG.md`. Links to compare views are updated as well.
All three workflows automate the tedious admin work of keeping my projects up to date.
And the changelog workflows ensure that all my packages have a great history of all the changes made to them. It's my part of being a good open-source maintainer.
The `backup-restore.yml` workflow is a recent addition to my collection.
On a schedule, the latest database backup is downloaded, decrypted, and imported into a MySQL database. Checks ensure that the backup is healthy and complete.
In my projects, all I have to do is create this workflow and add variables and secrets to access the database backup in an S3 bucket.
```yaml
name: backup

on:
  workflow_dispatch:
  schedule:
    - cron: "0 14 1 * *"

jobs:
  restore:
    uses: stefanzweifel/reusable-workflows/.github/workflows/backup-restore.yml@main
    with:
      # Value of `APP_NAME` env variable. Used to locate backup on remote disk.
      app_name: 'My Laravel App'
      php_version: '8.3'
    secrets:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
      AWS_BACKUP_BUCKET: ${{ secrets.AWS_BACKUP_BUCKET }}
      BACKUP_ARCHIVE_PASSWORD: ${{ secrets.BACKUP_ARCHIVE_PASSWORD }}
```
You can learn more about the history of this workflow in the "Introducing laravel-backup-restore" blog post.
In my reusable-workflows repository, you won't find a generic "run my test suite" workflow.
Running the test suite always differs from project to project. Some apps use MySQL, PostgreSQL, or SQLite. Some require specific PHP extensions or software installed on the server.
Creating a workflow that is flexible enough for all these requirements is possible, but it would lead to a complex YAML configuration filled with if-clauses and boolean options.
I like to keep things simple and easy to understand. Each project gets its own test workflow.
If your organisation or team works with many GitHub repositories, and you copy and paste workflows from one repo to another or have asked yourself "where is the most recent version of this workflow?", I suggest you give reusable workflows a try.
In addition to less duplicated code, you can version your reusable workflows. In the examples shown in this blog post, I always referenced the `@main` version of a workflow. But you could introduce a basic semantic versioning system that lets other developers in your organisation use `@v1` or `@v2` of your workflows.
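For example, assuming you've tagged a release of your reusable-workflows repository as `v1` (the tag and the `your-org` name here are hypothetical), a caller workflow would simply pin to that ref:

```yaml
jobs:
  pint:
    # Pin to a tagged release instead of the moving `main` branch.
    uses: your-org/reusable-workflows/.github/workflows/laravel-pint-fixer.yml@v1
```

Consumers then opt into breaking changes by bumping the tag, instead of being surprised by changes on `main`.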
My collection of workflows is still growing.
Next up on my list is adding Duncan McClean's post-release-comments action to a workflow, so folks get notified when the changes from their pull request are released.
I had a blast. Besides listening to great talks, I met many familiar faces from the community whom I'd only known from social media or GitHub. (I even skipped a couple of talks because I wanted to hang out and talk with others.)
Shaking hands and talking with folks in the flesh was the biggest highlight for me. I wish I had been less nervous, so that I would have asked proper, good questions instead of awkwardly standing there.
I will be more prepared for my next conference.
The venue was awesome as well. A big music hall filled with comfy seats and various places and levels to sit.
The food and drinks were pretty good as well. So were the spontaneous dinners with other Laravel developers.
Definitely want to attend a possible Laracon EU 2025, or even another Laracon event in Europe later this year.
For more details, check out my /uses page or defaults.rknight.me for a list of other "App Defaults" blog posts.
Like last year, I won't tell you about the nightmares I had or what food I ate. That's dull.
Ignoring my "wake-up call into reality", 2023 has been a great year.
I spent many days and evenings with friends or forged new friendships. Some of these new friendships grew out of loose connections I'd had for years but which started to flourish this year.
Visiting the Laravel Switzerland Meetups in different cities was a highlight as well. It's a booster shot of energy to see more folks interested in Laravel and web development in general. I also gave my second talk in June, about GitHub Actions. If you're curious, check out the recording.
My social media habits took a turn. After third-party Twitter clients vanished, my Twitter usage fell off a cliff. I peek at my feed every other day, but I'm more of a silent observer now. Mastodon and books are my new go-tos.
Mastodon feels like a cozy corner of the internet. Following PHP and frontend developers and some generally cool folks there makes the feed more interesting than Twitter, for me at least. Posts and feedback feel more genuine.
Maybe because I don't follow new folks on Twitter, my feed feels more and more like an echo chamber. The same thoughts just reverberating in a small space.
Last year, I picked up a new habit: walking 5,000 steps daily. I kept it going and bumped it up to 7,000-8,000 steps. Still a challenge, but it's a sweet spot.
Every night, an hour before going to bed, I take a 5-15 minute stroll. Sometimes I listen to a podcast or an audiobook; other times, I listen to the bustling sounds of my neighborhood.
Celebrated my 10-year anniversary at 2media this year.
The year was filled with lots of changes. Collaborating more closely with our parent and sister companies led to new projects and increased meetings.
This reinforced my preference for small teams, where each member takes initiative, blurring traditional role boundaries. In other words, I thrive when working with individuals who think collectively as a team, rather than in silos.
Freelancing dreams took a back seat this year. I would still like to help teams or companies with my automation or Laravel knowledge in the future, but that topic is not on the front burner right now.
Another milestone this year was that screeenly reached $100+ in monthly recurring revenue.
All in all, I'm excited about what 2024 has in store for me and how my work situation will look at the end of next year.
2023 was a year without many new projects. In my eyes, a good thing.
One new package born this year is laravel-backup-restore. It helps you restore a database backup hassle-free.
A significant portion of my open-source time this year involved adding support for Node 18 to sidecar-browsershot. Not a massive pull request, but getting all the gears turning took a good 20 hours.
Out of this work came a new AWS Lambda layer: sidecar-browsershot-layer. (I need to set up automation in this project to keep Chromium up to date. Future me will find a good solution here.)
A lot of time was spent on maintenance work as well: keeping packages up to date and, more time-consuming, answering issues and discussions.
In 2024, I need to find more hands for these bigger projects. It's not that I get hundreds of new issues every week, but each issue concerns a complex problem in an environment I can't control (e.g., GitHub Actions). (If you, dear reader, are interested in helping out here, please let me know!)
Besides the mentioned maintenance work, I started working on a new side project. It has the working title "Untitled Read It Later App" and is my personal spin on apps like Instapaper, Readwise, Pocket, or Matter.
I want to invite interested friends to an early alpha in the first half of 2024.
I will share more about the progress on Mastodon and on Twitter.
Looking at the state of the world, 2023 wasn't a great year either. Tensions, conflicts, inflation, climate catastrophe.
I'm grateful for health and a still intact body. Now wrapping up the year with reading, binging Lord of the Rings, and cooking and baking.
See you next year.
October 2023 was simultaneously one of the worst and one of the better months of my life.
First the good parts: I attended my first developer conference, Full Stack Europe, in Antwerp, Belgium.
By night train, I travelled from Zurich to Amsterdam and spent a beautiful day walking through the city. The next day, I took a train to Antwerp. There I joined my group of friends from the Laravel Switzerland Meetup, and we attended two days of talks all around web development.
The talks and atmosphere were great. The highlight for me was seeing internet friends, whom I'd known for years, in person for the first time and spending the evenings with them. (I can't wait to see some of them again at Laracon EU in February 2024.)
The worst part of October was learning that one of my parents, my mum, had suddenly fallen very ill.
From one night to the next, she started to vomit, got a high fever and chills, became dehydrated, and was incredibly low on energy and sleepy.
After a turbulent night, an emergency visit to a doctor's clinic, antibiotics, and calls with a paramedic family member, she was admitted to the hospital.
After a night in the intensive care unit and a couple of days in an isolation room, we finally got a diagnosis: sepsis (blood poisoning) and a potential for kidney failure.
For unknown reasons, a part of her intestine had become inflamed, which led to the sepsis. Her white blood cells were quickly being destroyed and used up, and her body couldn't produce new cells fast enough. All the energy in her body was now being used to survive.
But now that the doctors knew what was going on, they could start a medication therapy, and things were already looking up and getting better. But slowly, very slowly.
While she was recovering in the hospital, the doctors did some ultrasound scans and, by luck, found a small tumor in another part of her body.¹
Through the wonders of modern medicine, she recovered relatively quickly (more on that later) and could leave the hospital after 8 days.
Now, 8 weeks after the incident, she is basically back to normal. She goes on her regular walks around her neighborhood and, if the weather and temperature allow, on her three daily bike rides around her village.
This was the first time in my life that a parent was in a life-threatening situation.
At first, I dismissed the signals and thought it was nothing serious. Like many people, my parents have a cough or a runny nose from time to time. No big deal usually. (And they had been at a gathering just a day before; maybe someone had infected them with something like the flu.)
Only on day 2 did I get worried; quite a lot, actually.
My Apple Watch reminded me that my resting heart rate during the whole period had jumped by 10 points, from 55 to 65. I was anxious, unable to concentrate on my work, and generally in a sad mood.
My mind wandered into scenarios where my mother wouldn't survive this and I would lose her. Those were the saddest moments, where I just let the thoughts wash over me and let all the emotions out.
However, I was thankful that I had developed a good habit in the past few years of going to the gym every morning. Lifting heavy weights for an hour and running or biking a couple of kilometers each morning aired out my brain and helped me cope with the situation.
Most of all, I'm thankful for my father. He listened to his gut and thought, "this isn't something that will resolve on its own".
At first, my mother didn't want to go to the hospital. She, too, thought it was something minor that could be cured by staying at home. But my father managed to convince her otherwise and brought her to the emergency room of the nearest hospital, where a team of professionals could care for her.
After her situation stabilized, a doctor asked my mother whether she was in the hospital of her own accord or had been forced to come by someone.
She told the doctor that her husband had brought her. The doctor mentioned that her husband had made the right choice, as she probably wouldn't have survived another day or two at home.
He saved her life.
I'm also eternally grateful for my cousin P, who works as a paramedic. My father and I could talk to and text her about the ongoing situation; she made suggestions on what to do next and what to ask the doctors. She also asked how the recovery was progressing and what was coming next.
She also gave me very good advice: "Don't google too much".
After we got the diagnosis, P told me that my mother would now go through different phases of recovery and that X and Y would happen. She sent me links to medical dictionaries that explain everything and told me to "not look further into it; your mum will be okay".
Only now, weeks later, have I started to search the web for "sepsis" and found Reddit communities where folks wrote about the loss of a family member due to sepsis that was diagnosed too late.
I learned how deadly sepsis can be and how fast everything can go.
This reinforced my gratefulness that she survived and that my father reacted quickly and brought her to the hospital. And, obviously, my gratitude to the entire hospital staff who treated her.
As mentioned, she recovered quickly. A common trait in my immediate family. My mother has quite the willpower. If she wants something done, she will put everything behind it and see it done.
During my visits to the hospital, she mentioned that she had a positive outlook. "Things will get better from here." "I could walk for 5 minutes today; that's better than yesterday." "Others here have it worse. I'm happy that I can speak, read and eat on my own."
Back home, after another week of rest, she already started on what I call her recovery plan. She can't be restless, so she began doing short daily walks: from her front door to the end of the street and back again. Every day a couple of steps further.²
After another week, she already felt fit enough for a walk around the neighborhood.
8 weeks later, when it's not raining or snowing, she goes on her regular daily bike rides around the outskirts of her village. 8 weeks later! A similarly aged gentleman at my local gym was in the hospital for 10 weeks after he got sepsis. And my mum is already cycling and walking around again.
This rapid recovery is something my mother, my father, and I have in common.
I barely get sick, but when I do, I recover quite quickly (like in a single day)³. Same with my parents.
I always wondered if this is just a lucky genetic accident and whether there's a scientific program researching this. I would be more than happy to donate my "recover quickly" trait to humanity.
But what does this all have to do with a "wake-up call"?
This year, I felt unusually tense. The pressure at work was constantly high. I didn't take the vacation days I wanted to. I felt burnt out from my open-source work.
I felt trapped in my day-to-day loop of "sleep, gym, work, eat, sleep".
My mother's health scare reminded me that life isn't all about work. Life is also not about internet fame, about being funny on Twitter or creating a cool project on GitHub.
Life is about being with loved ones and enjoying the time we have on this floating rock in space. Explore new places. Meet new people. Eat new food. Do something nice for a stranger.
In 2023, I prioritized spending more time with friends than I did in 2022. I definitely want to continue that in 2024.
To bring this rambling blog post to an end, I want to tell you this:
Hug your loved ones. Spend time with them. Tell them (or force them) to drink enough water.
If you haven't talked to them in a while, call them right now and ask them how their day or week went.
You never know if it will be the last time you talk to them.
I either added separate columns for each setting to the respective database table (e.g., a `timezone` or `date_format` column to the `users` table), created separate database tables to hold all the settings (e.g., a `user_settings` table with `user_id`, `timezone`, and `date_format` columns), or added a generic `settings` JSON column to my `users` table to store the settings.
Each method has its advantages and disadvantages. Adding separate columns is great if you need to query by a specific setting (e.g., "which users have selected a custom date format like `YYYY-MM-DD`?"). On the other hand, it can be excessive to add a new column just for a small setting, especially if your table has millions or billions of rows.
Separating the settings columns from the core table can be better for database performance. Your database doesn't have to load all settings columns all the time, which is useful when you need a list of users or projects in your database.
Adding a catch-all `settings` column is the easiest approach but can lead to performance issues and different structures between rows. It raises questions about how your app should react if an expected setting does not exist on a user. Do you have to check for the existence of the JSON key everywhere you check for the setting (e.g., `$user->settings?->my_custom_setting`)?
My solution to all of these problems is the `spatie/laravel-data` package developed by the Spatie team.
The primary use case of the `laravel-data` package is to create strongly typed data objects in your Laravel projects. Here's an example of such a data object:
```php
use Spatie\LaravelData\Data;

class SongData extends Data
{
    public function __construct(
        public string $title,
        public Artist $artist,
    ) {
    }
}
```
The package also supports Eloquent Casting, which means a data object can be saved to your database and, when retrieved, cast back into a strongly typed data object instance.
The combination of strong types and Eloquent casting inspired me to use this package to store settings in my apps.
Here is a basic example of a `UserSettings` object I might add to any of my projects.
```php
<?php

namespace App\Data;

use App\Domain\Support\Enums\ThemeAppearance;
use Spatie\LaravelData\Data;

class UserSettings extends Data
{
    public function __construct(
        public string $timezone = 'UTC',
        public string $locale = 'en',
        public string $date_format = 'YYYY-MM-DD',
        public ThemeAppearance $appearance = ThemeAppearance::AUTO,
    ) {
        //
    }
}
```

In these settings, we store the user's timezone, locale, preferred date format, and theme appearance.
After creating a migration that adds a `settings` column to my `users` table, I update the `User` model so that `settings` is cast to a `UserSettings` instance.
```php
use App\Data\UserSettings;

/**
 * The attributes that should be cast.
 *
 * @var array<string, string>
 */
protected $casts = [
    'settings' => UserSettings::class . ':default',
];
```
Now, thanks to `laravel-data`, whenever I access `$user->settings`, I always get an instance of `UserSettings` with all the properties that I've declared in the PHP class.
If a user signed up two years ago and the app didn't support `$date_format` back then, the value on that user would just fall back to the default value I declared in the `UserSettings` class.
The next time that user updates their settings, their outdated database state gets updated. Settings that are no longer supported will be removed as well.
If you can't wait until your users update their settings, you can create an Artisan command that does this for you.
```php
use App\Data\UserSettings;
use App\Models\User;

Artisan::command('app:update-user-settings', function () {
    // Get all Users and update their settings
    User::query()
        ->each(function (User $user) {
            // Update settings to the newest format
            $user->settings = UserSettings::from($user->settings);
            $user->save();
        });
});
```
The command loops over all users in your database and updates the `settings` column with a fresh version. The current settings of a user (e.g., their selection for `$date_format`) will be migrated.
At the beginning, I mentioned that adding a catch-all `settings` column might be a bad idea if you don't have a structured way of storing settings. If different parts of your app add new keys to `settings`, it can become messy quickly. By using laravel-data, this problem goes away. There is one single source of truth for settings.
You can make settings strongly typed by using type hints and Enums. You could even create a nested structure of settings.
Imagine a `UserSettings` class that includes `UserGeneralSettings`, `UserNotificationSettings`, and `UserAppearanceSettings` objects.
```php
<?php

namespace App\Data;

use App\Domain\Support\Enums\ThemeAppearance;
use Spatie\LaravelData\Data;

class UserSettings extends Data
{
    public function __construct(
        public UserGeneralSettings $general,
        public UserNotificationSettings $notification,
        public UserAppearanceSettings $appearance,
    ) {
        //
    }
}
```
One thing to keep in mind is that querying for specific settings inside the JSON column can lead to performance issues and should probably be avoided.
If your app regularly needs to query for users who have selected a particular `date_format`, it's better to promote this setting to its own column. This makes the work of your database, and possible indexing, much easier.¹
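As a sketch of that promotion, assuming MySQL and the `UserSettings` shape from above (the column name, default value, and backfill expression are illustrative), a migration could add the column, index it, and backfill it from the existing JSON data:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('users', function (Blueprint $table) {
            // Promote the setting to a real, indexable column.
            $table->string('date_format')->default('YYYY-MM-DD')->index();
        });

        // Backfill from the existing JSON `settings` column (MySQL `->>` operator);
        // fall back to the default when the key is missing.
        DB::table('users')->update([
            'date_format' => DB::raw("COALESCE(settings->>'$.date_format', 'YYYY-MM-DD')"),
        ]);
    }
};
```

After the backfill, queries filter on a plain indexed column instead of a JSON path expression.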
Going forward, I will use this approach for all cases where I need the concept of settings in new and existing apps.
I believe that using `spatie/laravel-data` and Eloquent casting is better than just putting your settings into a generic `$settings` array. I encourage you to give it a try in your next project.
Do you like this approach? Do you think this is a bad idea? Let me know!
MySQL supports indexing values inside a JSON column, but I would still promote the setting to its own column if I regularly query by it. ↩
Everyone does it differently. Some put all those PDFs, receipts, images and documents they get on their desktop or in the "Downloads" folder on their computer. Others add them in a "Documents" folder on a cloud storage provider like Dropbox, Google Drive or Microsoft OneDrive.
Some put everything in a single folder. Others try to bring organisation to the chaos by adding a "personal", "finances" or "travel" folder.
In this post, I would like to share the personal folder structure that I use and have refined over the last 5 years.
I discovered the Johnny Decimal system (JD) many years ago and fell in love with it. It gave me the constraints needed to bring order to my chaos.
The basic rule of JD is to have up to 10 areas, each with up to 10 categories. You're not allowed to create subfolders.
If applied correctly, this means you have a folder structure that is at most two levels deep. (I make a few exceptions for year folders, as you will see in the example below.)
These constraints helped me move away from a single folder containing all the invoices from the last 10 years, and away from a deeply nested folder structure for the old software I collect (Software → Windows → Productivity → Office → …).
I first applied JD to my files and folders stored on my NAS in 2018 and wrote about it in 2020.
Since then, a couple of things changed.
In 2022, I revised the entire structure and consolidated all my files in iCloud Drive. I don't maintain a different structure on Google Drive or on my NAS. Everything is in iCloud Drive now.²
It took a couple of weeks to find the right shape when I revised my folder structure in 2022.
In the years before, I had noticed that I'd gotten sloppy: I added folders where none should exist or stopped sorting files because I just didn't know where to put them.
Once I agreed with myself that I needed to change something, I took pen and paper and wrote down different ideas for the areas and categories I needed.
I salvaged a couple of areas from my existing NAS structure but redefined all the categories. I simplified. Merged similar-sounding categories with each other. Cut those that I barely ever used. Freed up number spaces.
Below is the entire folder structure, as a tree, which I settled on and have been using since late 2022.
I've redacted some names and omitted some private folders, but I think you get the gist. (I will explain some of the areas and categories at the end of this list. Scroll on if you're interested.)
```
├── 00-00 INBOX
├── 10-19 Personal-Documents
│   ├── 11 Personal
│   │   ├── 11.01 Identification
│   │   ├── 11.02 Correspondences (by year)
│   │   ├── 11.03 Civil Defense Schaffhausen
│   │   ├── 11.04 Civil Defense Zürich
│   │   ├── 11.05 <Car A>
│   │   ├── 11.06 Specialized Crux E5 Bike
│   │   └── 11.07 Emergency Plan
│   ├── 12 Health
│   │   ├── 12.01 Medical History
│   │   └── 12.02 Fitness Guides
│   ├── 13 Contracts
│   │   ├── 13.01 <Mobile Carrier>
│   │   ├── 13.02 <Car Sharing Provider>
│   │   └── 13.03 <Home Internet Provider>
│   ├── 14 Travels
│   │   ├── 14.01 Amsterdam 2024
│   │   └── 14.02 Greece 2024
│   ├── 15 Insurances
│   │   ├── 15.01 Old-age and Survivors Insurance
│   │   ├── 15.02 <Insurer> Health Insurance
│   │   ├── 15.03 <Insurer> Home Insurance
│   │   ├── 15.04 <Insurer> Car Insurance
│   │   ├── 15.05 <Insurer> Travel Insurance
│   │   └── 15.06 Apple Care
│   └── 16 Housing
│       └── 16.01 <Current City>
│           ├── 16.01.01 Correspondence <Landlord>
│           │   ├── 2022
│           │   └── 2023
│           ├── 16.01.02 Energy Bills
│           │   ├── 2022
│           │   └── 2023
│           ├── 16.01.03 City Correspondence
│           └── 16.01.04 Rental deposit
├── 20-29 Finances
│   ├── 21 Banking & Investments
│   │   ├── 21.01 <Bank A> Cash Account
│   │   ├── 21.02 <Bank A> Savings Account
│   │   ├── 21.03 <Bank B> Savings Account
│   │   ├── 21.04 Pension Account
│   │   ├── 21.05 <Bank C> (Closed)
│   │   ├── 21.08 <Credit Card A>
│   │   ├── 21.09 <Investment Portfolio A>
│   │   ├── 21.10 Book Keeping
│   │   └── 21.12 GitHub Sponsors
│   ├── 22 Donations (by Year)
│   │   ├── 2022
│   │   └── 2023
│   ├── 23 Taxes (by Year)
│   │   ├── 23.01 Zurich 2022
│   │   └── 23.02 Zurich 2023
│   └── 24 Purchases (Bills)
│       ├── 2022
│       └── 2023
├── 30-39 Education
│   ├── 31 Primary School
│   │   ├── 31.01 Marks
│   │   ├── 31.02 5th Level
│   │   └── 31.03 6th Level
│   ├── 32 Secondary School
│   │   ├── 32.01 Marks
│   │   ├── 32.02 7th Level
│   │   ├── 32.03 8th Level
│   │   └── 32.04 9th Level
│   ├── 34 Apprenticeship Information Scientist
│   │   ├── 34.01 Marks
│   │   ├── 34.02 IPA
│   │   ├── 34.03 School
│   │   └── 34.04 <Company>
│   ├── 35 BMS
│   │   ├── 35.01 Marks
│   │   ├── 35.02 Documents
│   │   ├── 35.03 Biology
│   │   ├── 35.04 German
│   │   ├── 35.05 English
│   │   ├── 35.06 French
│   │   ├── 35.07 History
│   │   ├── 35.08 Math
│   │   ├── 35.09 Economics
│   │   ├── 35.10 Chemistry
│   │   └── 35.11 Physics
│   └── 36 Courses
│       ├── 36.01 Refactoring to Collections (2016)
│       ├── 36.02 Advanced Vue (2018)
│       └── 36.03 …
├── 40-49 Professional Work
│   ├── 41 Resumé
│   │   ├── 41.01 Archive
│   │   └── 41.02 Inspiration
│   ├── 42 <Company A> (20xx-20xx)
│   ├── 43 <Company B> (20xx-20xx)
│   ├── 44 <Company C> (20xx-20xx)
│   └── 45 <Company D> (20xx)
│       ├── 45.01 Contracts
│       └── 45.02 Salary Slips
├── 50-59 Media
│   ├── 51 Books & Papers
│   │   ├── 51.01 E-Book Library
│   │   ├── 51.02 Quotes and Highlights
│   │   ├── 51.03 Papers
│   │   └── 51.04 Manuals
│   ├── 52 Software
│   │   ├── 52.01 Apps
│   │   ├── 52.02 Games
│   │   └── 52.03 Magazine DVDs
│   ├── 53 Audio
│   │   ├── 52.01 Sounds
│   │   └── 52.02 Music Library
│   ├── 54 Videos
│   │   ├── 54.01 1 Second Everyday
│   │   ├── 54.02 TV Shows
│   │   ├── 54.03 Movies
│   │   ├── 54.04 Digitized DVDs
│   │   └── 54.05 Digitized VHS
│   ├── 55 Images
│   │   ├── 55.01 Wallpapers
│   │   ├── 55.02 App Icons
│   │   ├── 55.03 Terminal Inspiration
│   │   └── 55.04 Screenshots
│   ├── 56 Photos
│   │   ├── 56.01 Profile Pictures
│   │   ├── 56.02 Digitized Photo Albums
│   │   └── 56.02 Headers
│   └── 57 Eagle Libraries
│       ├── Architecture.library
│       ├── Eagle Auto Import
│       ├── Fashion.library
│       ├── Inspiration.library
│       └── Memes.library
├── 60-69 Writing
│   ├── 61 Notes
│   ├── 62 Recipes
│   │   ├── 62.01 Snacks
│   │   ├── 62.02 Dessert
│   │   ├── 62.03 Soup
│   │   └── 62.04 Bread
│   ├── 63 stefanzweifel.dev
│   │   ├── 63.01 Drafts
│   │   └── 63.02 Archive
│   └── 64 Journal
│       ├── 2022
│       └── 2023
├── 70-79 Projects
│   ├── 71 Projects
│   │   ├── 71.01 <Project A>
│   │   ├── 71.01 <Project B>
│   │   └── 71.01 <Project C>
│   ├── 72 stefanzweifel.dev
│   │   ├── 72.01 Design
│   │   └── 72.02 Backup
│   └── 73 screeenly
│       ├── 73.01 Accounting
│       ├── 73.02 Design
│       ├── 73.03 Feedback
│       ├── 73.04 Pricing Decision
│       └── 73.05 Backups
├── 80-89 ~NOT IN USE~
└── 90-99 Archives
    ├── 91 Backups
    │   ├── 91.01 Hazel Rules
    │   ├── 91.02 PiHole Settings
    │   └── 91.03 Photo Library
    └── 92 Archive
        ├── 92.01 <Photo Book A>
        └── 92.02 <Photo Book B>
```
β βββ 92.03 Google <Account A>
β βββ 92.06 Reddit <Account A>
β βββ 92.07 mastodon.social Account
βββ 93 Keepsakes
Phew! What a list, right?
As the tree alone is a bit overwhelming and maybe not that expressive, I would like to highlight some areas and categories and explain what I put in them.
(My structure is by no means perfect, and while writing this post I noticed some flaws and duplicate folders I would like to correct in the future.)
What I noticed most while revising my structure is that I need to embrace the 99 available folder numbers per category more.
Previously, I tried too hard to come up with good area names and then put only 2-3 subfolders in each area. (So much wasted number space!)
This is the folder I struggled with the most. What even are "personal" files?
Right now it's basically a catch-all area that contains all documents related to me: random letters I received over the years and want to keep, documents related to my civil defence service, documents about cars or bikes I own or owned, and legal documents regarding apartments or insurances.
A better name for "40-49 Professional Work" would be "Career" or "Employment". (By the time you're reading this blog post, I've probably already updated the folder name.)
"40-49" contains my CV and all files related to my jobs over the last few years. I named it "Professional Work" because I wanted to distinguish between "work I'm being paid for" and "work I do in my free time".
I thought "Professional Work" would maybe contain popular ongoing open source projects or paid side-projects.
"Projects" would contain one-off projects like making a photo book for a family member or a poster print.
However, in the last 5 years I didn't pick up freelancing or do any other work outside of my employment. Files related to projects still all ended up in "70-79 Projects".
That's why I will rename 40-49 to "Career" and make the structure simpler again.
As the name suggests, this folder contains all my writing: drafts or finished blog posts, letters, various cooking recipes I developed over time, journals, and a (currently empty) notes folder.
I switched to using Obsidian for my notes a couple of months ago, but haven't migrated my vault to this structure yet.
Moving my notes out of a ~/iCloud/Obsidian folder prevents me from ever using Obsidian on my phone or tablet, but I never really needed that anyway.
If I need to write something down, I can create a note in iA Writer or create a task in Things. If I need to read a note, I can search my entire vault through iA Writer or use on-device search.
That's the joy of using plain text for your notes. You can read the content anywhere.
Probably the biggest change to my previous setup is the stronger distinction between what belongs into the "Archive" and "Backup" folder.
"Archive" contains data that can't be imported into other systems or files I otherwise want to keep.
For example, PDF archives of essays or newspaper articles I like, or account exports from various software services (Reddit, Spotify, etc.).
"Backup", on the other hand, contains data that I could import into software or that I could use to restore software to a particular state.
For example, database backups for certain side projects; my entire photo library grouped by year, so that I could re-create a library in a new app; or configuration files for software that doesn't have a sync service or can't be configured through dotfiles.
I've been using JD for a couple of years now and will use it for the foreseeable future.
Migrating all my files to iCloud Drive was also a great move. It's such a pleasant surprise when I can say during a conversation: "Give me a sec. That trip should be in 14.x." and a few seconds later show them a travel inquiry for a trip I made years ago.
The only negative aspect of my setup is that I can't easily create a backup of my iCloud Drive.
There is a "Download Now" feature available in Finder, but iCloud will intelligently remove the downloads after a couple of days or weeks to free up space. I also can't go to icloud.com and download an archive of a specific folder, nor can I use an HTTP API to generate a zip-archive.3
For now I create weekly incremental backups of the most important files using BorgBackup and store them in two different physical locations.4
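For the curious, such a weekly Borg job boils down to two commands. The sketch below is purely illustrative; the repository path, source directory, and retention policy are placeholders, not my actual setup.

```shell
#!/usr/bin/env sh
# Sketch of a weekly Borg backup job.
# All paths and retention values below are placeholders.

BORG_REPO="/mnt/backup-drive/borg-repo"   # hypothetical Borg repository
SOURCE_DIR="$HOME/Documents"              # hypothetical folder to back up

# Create a deduplicated (and therefore effectively incremental) archive,
# named after today's date.
borg create --stats --compression zstd \
    "${BORG_REPO}::files-$(date +%Y-%m-%d)" \
    "${SOURCE_DIR}"

# Drop old archives, keeping 8 weekly and 12 monthly snapshots.
borg prune --keep-weekly 8 --keep-monthly 12 "${BORG_REPO}"
```

Pointing the same script at a second repository on another drive (or a remote host over SSH) covers the "two physical locations" part.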
If I sparked your interest in Johnny Decimal, check out their website.
If you want to learn even more, I can recommend purchasing their workbook which goes into even more detail and guides you through the whole process of setting up a new folder structure.
If you have any questions or suggestions, please let me know!
https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z ↩
I do have encrypted backups of all important files in a couple of places. But that's for another blog post. ↩
I still have hope that Apple will one day make such a feature available, but I also understand that features like "Advanced Data Protection" don't make that easy. ↩
I think my whole backup solution using Borg might also be something I should blog about. ↩
The paths should be mapped 1:1. So http://example.com/path/to/site would be redirected to http://new-example.com/path/to/site.
To ensure that certbot on Laravel Forge can still issue a valid SSL certificate for the now deprecated example.com domain, I needed to make sure that requests to /.well-known/acme-challenge/ are not redirected.
After tinkering with various ideas I landed on the following solution.
location / {
    # Check if the request is used to generate a Let's Encrypt SSL certificate
    if ($request_uri !~ ^/.well-known/acme-challenge/) {
        # Redirect to the new domain, including query parameters
        return 301 https://example.com$request_uri;
    }

    # Create an alias to the /home/forge/.letsencrypt directory.
    # Taken from /etc/nginx/forge-conf/example.com/server/letsencrypt_challenge.conf
    auth_basic off;
    allow all;
    alias /home/forge/.letsencrypt;
}
Replace example.com with the domain you would like to redirect to, and then replace the existing location / { try_files index.php } block in your Laravel Forge NGINX configuration with the snippet above.
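If you want to sanity-check the routing logic before deploying it, the "if" condition in the snippet can be emulated outside of NGINX. The tiny shell function below mirrors the check; it's purely illustrative, and the real behavior is of course decided by NGINX itself.

```shell
# Emulates the NGINX condition: requests under /.well-known/acme-challenge/
# are served locally, everything else gets a 301 redirect.
route_for() {
  case "$1" in
    /.well-known/acme-challenge/*) echo "serve acme challenge" ;;
    *) echo "redirect 301" ;;
  esac
}

route_for "/path/to/site"                      # a regular page
route_for "/.well-known/acme-challenge/token"  # a certbot request
```

After deploying, a quick curl -sI against an old URL should show a Location header pointing at the new domain.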
spatie/laravel-backup.
This post goes into detail about what the package provides, how it came to be, and how it can help you make sure you always have healthy backups.
As mentioned, the package helps restore backups created with spatie/laravel-backup.
I use Spatie's package in a lot of my projects. It gives me the confidence that, in case of a fuck-up by me or a server issue, I have access to a somewhat up-to-date backup of my production database.
We also use Spatie's package at work. In one of those work projects, I once created a custom Restore-command that would download the latest backup, decrypt and decompress the zip file, and import the MySQL dump into my local database.
We primarily used this command to help us debug production issues. Sometimes we needed a snapshot of the production database to debug certain parts of the app. Creating a new snapshot and downloading the backup always took quite a lot of time due to the size of the database. Reusing the already existing daily backup was the obvious solution.
The now-released package was inspired by my original Restore-command.
Fast forward to last December.
Our server provider informed us that the operating system of a managed server needed to be updated, including an upgrade to MySQL 8. "Sure, no problem", we said.
We prepared our services and apps for a short downtime, created fresh backups for some of the apps hosted on that server, and gave the server provider the go-ahead for the update.
Half an hour later we were informed that there was an issue with the MySQL upgrade and that the downtime would take a bit longer. The issue got resolved and MySQL was running again.
Before disabling the maintenance mode in our apps, we noticed that the database of one app was empty.
Turns out the MySQL backup made by the server provider had failed for that database, and we had forgotten to create a fresh backup for that specific app.
Luckily, the server provider had created a snapshot of the state of the server before running the upgrade. They sent us a MySQL dump of that snapshot and we were back in business.
It wasn't the end of the world, but it showed me once again how important it is to check your backups regularly.
That's how this new package was born. I made it my goal to solve this "check backup integrity"-problem once and for all.
After you've installed laravel-backup-restore
a new backup:restore
-command is available in your Laravel project.
If you just run php artisan backup:restore
, the command will become interactive and will ask for some input.
Which backup to restore, the decryption password and a last confirmation to start the restore process.
The restore process can also be automated by passing the --no-interaction option to the command. Laravel will then use any of the provided options or their default values for the restore process.
php artisan backup:restore --backup=latest --no-interaction
As explained earlier, the package will download the selected backup to your machine, decrypt and decompress it and then import the database dump into your local database. The package currently supports MySQL, PostgreSQL and SQLite.
File backups are not restored. The command would, for example, not replace your local storage/app directory with the folder stored in the backup.
The final feature I would like to highlight in this post are health checks. As explained earlier, my goal with this package is to solve the "backup integrity" problem I encountered at work.
To ensure that backups are okay, I needed a way to check if they are healthy. Health checks solve this problem by allowing users of the package to write their own "health check"-logic in simple PHP classes.
The package ships with a DatabaseHasTables health check to ensure that there is at least one database table present after the backup has been restored.
Writing your own health check is super simple. Create a new class that extends Wnx\LaravelBackupRestore\HealthChecks\HealthCheck and implement the run-method.
The example below checks that, after the database has been restored, there exists at least one Sale-model that was created yesterday.
namespace App\HealthChecks;

use Wnx\LaravelBackupRestore\HealthChecks\HealthCheck;
use Wnx\LaravelBackupRestore\HealthChecks\Result;
use Wnx\LaravelBackupRestore\PendingRestore;

class MyCustomHealthCheck extends HealthCheck
{
    public function run(PendingRestore $pendingRestore): Result
    {
        $result = Result::make($this);

        // We assume that your app generates sales every day.
        // This check ensures that the database contains sales from yesterday.
        $newSales = \App\Models\Sale::query()
            ->whereBetween('created_at', [
                now()->subDay()->startOfDay(),
                now()->subDay()->endOfDay(),
            ])
            ->exists();

        // If no sales were created yesterday, we consider the restore as failed.
        if ($newSales === false) {
            return $result->failed('Database contains no sales from yesterday.');
        }

        return $result->ok();
    }
}
This check would be useful if you create daily backups and want to check the backup integrity on a daily basis too.
If you know with 100% certainty that new sales are made every day, this is an easy way to check that the backup contains the expected data.
But running the restore process manually is cumbersome. Why not automate it?
By using GitHub Actions' schedule-trigger, we can create a workflow that runs the backup:restore command at regular intervals.
The workflow below can be triggered manually or runs on the first day of each month automatically.
The restore command will restore a backup stored on AWS S3 and wipe the database once the restore has run through successfully. We pass some secret environment variables to the command to ensure it can connect to AWS.
name: Validate Backup Integrity

on:
  # Allow triggering this workflow manually through the GitHub UI.
  workflow_dispatch:
  schedule:
    # Run workflow automatically on the first day of each month at 14:00 UTC
    # https://crontab.guru/#0_14_1_*_*
    - cron: "0 14 1 * *"

jobs:
  restore-backup:
    name: Restore backup
    runs-on: ubuntu-latest

    services:
      # Start MySQL and create an empty "laravel"-database
      mysql:
        image: mysql:latest
        env:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: laravel
        ports:
          - 3306:3306
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: 8.2

      - uses: ramsey/composer-install@v2

      - run: cp .env.example .env
      - run: php artisan key:generate

      # Download the latest backup and restore it to the "laravel"-database.
      # By default the command checks if the database contains any tables after the restore.
      # You can write your own Health Checks to extend this feature.
      - name: Restore Backup
        run: php artisan backup:restore --backup=latest --no-interaction
        env:
          APP_NAME: 'Laravel'
          DB_PASSWORD: 'password'
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
          AWS_BACKUP_BUCKET: ${{ secrets.AWS_BACKUP_BUCKET }}
          BACKUP_ARCHIVE_PASSWORD: ${{ secrets.BACKUP_ARCHIVE_PASSWORD }}

      # Wipe the database after the backup has been restored.
      - name: Wipe Database
        run: php artisan db:wipe --no-interaction
        env:
          DB_PASSWORD: 'password'
(More details on how this workflow works can be found in the GitHub repository.)
If the restore command, including any defined health checks, fails, the entire workflow will fail. GitHub will send a notification if this is the case, or you could add additional steps to the workflow to, for example, send you a Slack message if the restore failed.
Like many of my packages, I think v1 of this package is feature complete. I currently can't think of any new features I could add that would make the package better.
If you have any feedback or suggestions, please leave them on the GitHub repository.
And if you don't plan to use this package, I would encourage you to at least test your backups now and more regularly in the future.
Your future self will thank you.