Background
For a long time I ran a WordPress site. Over time I realized that it would never become as fast as my basically static content deserves. Another problem was that if I tweaked anything, I couldn't do a "diff" later to see what had actually changed. It was also difficult to get back to a previous state, unless I did full-website backups all the time. I realized that I needed to start from scratch with a simple concept: HTML/PHP and CSS - no database. I briefly considered not even using PHP, but I wanted common content across pages, and the fact that HTML does not support include files was enough to put me off. Later I learned to appreciate PHP, although I don't use it heavily.
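The include mechanism is the whole reason for PHP here. A minimal sketch of how a page can pull in shared content (the file names header.php and footer.php are the ones used on this site; the content line is a placeholder):

```php
<?php include 'header.php'; ?>   <!-- shared top of every page -->
<p>Page-specific content goes here.</p>
<?php include 'footer.php'; ?>   <!-- shared bottom of every page -->
```

Change header.php once, and every page follows.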
In this log I focus on the overall stuff - and the details that have made an impression on me. The rest can be seen in any browser using "View page source" or the debugger in e.g., Chrome or Firefox.
2025-07-30 - Starting Up
In the worldwide DNS (Domain Name System), I used to have an old redirect from my very old klauselk.dk domain to my official website with domain name klauselk.com. I started by deleting this redirect (it took less than half an hour to take effect), as I don't think the old name is used anymore. This freed up the dk-domain for fun and games. I then copied the very few pre-created standard files (index.php, header.php, footer.php and style.css) from my PC to one.com using the one.com file-manager. I also barred ftp-access and opened up for sftp access, which I will use shortly. In contrast to my WP-site, I will be working with relative paths here. I will keep a synchronized folder-structure on my PC. The folder structure is seen 1:1 as URLs on the website. This is different from WP, where a "slug" - a name not related to the file-hierarchy - is used.
Now I created a file in the root of my Apache server called ".htaccess" (make sure that this cannot be accessed via the browser) with the following content:
#Rewrite everything to https
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
The RewriteCond is like an "if" expression, and the RewriteRule is the action part of the if. This ensures that plain http is not allowed - only https. HTTPS is the secure version - showing up as a padlock in browsers. Obviously the website's look is not fantastic yet, but the whole infrastructure is there:
- DNS - Routing visitors to the server when the URL is "klauselk.dk"
- HTTPS - Secure datagrams between a browser and the server - assuring that the above URL becomes "https://klauselk.dk". (A check later showed that the server allowed secure http2 and http3 - all good.)
- HTML - Basic formatting of text, background etc.
- PHP - Header and Footer
- CSS - Assuring (very minimal) consistent formatting
- Structure - Relative paths to files as on PC
Since the whole idea of this new website is to do things lean and mean, I immediately did a debug in Chrome - using F12 to invoke Chrome DevTools. As expected, I found the actual page and a cascading style sheet - but also something unexpected.
Where did the mysterious "tag_assistant_api_bin.js" come from? As indicated in the figure, it was requested from a Chrome extension. I tested in Chrome's "incognito" mode - and there it was gone. So working with Google tags earlier had led me to install a Google-Tag-Manager extension on my PC for debugging. In other words, my visitors are not burdened with this overhead.
Compared to the WordPress site I still lack a lot of functionality:
- Menu with navigation and search
- Theme picture for all pages
- More fonts
- Responsiveness - handling both desktops and mobiles.
- Logo Elk on pages as well as next to site URL
- Sitemap.xml and connection to Google Analytics. This must wait until I am ready to transition for real.
- Performance cache utilization.
Much of the above will happen via CSS. Behind the scenes I also need to streamline VS Code and use Secure FTP. I have used the free FileZilla for Secure FTP before and as it worked great, I also use it now. Here are some hints:
- Make sure to use "Synchronized Browsing". This means that you have a "root" folder locally that has a matching folder on the web server. When you turn on Synchronized Browsing, a change on one "side" is mirrored on the other side. If you forget this, you will soon find yourself uploading files to the wrong folder.
- Enable "Directory Comparison". This colors files that have a different timestamp or size (I normally use size). This way it's not so easy to forget to upload a changed file. Note that this requires you to refresh files and folders with the button for this or CTRL-F5.
- Use the "Site Manager" to configure your sites once and for all.
I keep switching between XAMPP for tests (remember to switch on the FileZilla server in XAMPP) and my live site, so this saves me a lot of time.
2025-08-1 - Cascading Style Sheets and Pictures
Using CSS right is extremely important. It decides the whole layout. Done right, it's almost magical how much the look & feel can change with almost no effort. Done wrong, it will be a source of eternal agony. I have very little experience with CSS and needed to learn. Along the way I also structured my use of include files - for header, footer and now also menu. This is probably the main reason for me to use PHP, as HTML does not support include files. Finally, I managed to have navigation as a left sidebar, and an empty space on the right side for future use (not included in the screendump below).
I want to try the left sidebar concept, as it allows for longer menus. I will probably also need menus in footer and/or header. Vertical menus can be considered old-style, but they are actually coming into fashion again, because they allow you to include all your links - it just grows downwards - and it also works on mobiles.
I created a style for images that treats them as blocks instead of inline text. This ensured that they don't dance around, and can be e.g. left-aligned. There were problems with images not keeping their aspect ratio (looking drawn-out). This turned out to be an issue with scaling. I had given width and height for the pictures - divided by 2 to fit decently. This ought to be easy for the browser to fit, but apparently this is not the way to keep the aspect ratio. A solution was to only give one dimension, and let the browser handle the other. Still, large pictures may get cut off. I knew images are hard - and as can be seen later in this log, I eventually got completely rid of the "size" issue by not specifying a size at all.
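A minimal sketch of such an image style (not necessarily the exact rules this site ended up with):

```css
/* Images as blocks that scale with their container, keeping aspect ratio */
img {
  display: block;    /* no inline "dancing" with surrounding text */
  max-width: 100%;   /* never wider than the container */
  height: auto;      /* the browser keeps the aspect ratio */
}
```

With this in place, no width/height needs to be given per image at all.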
My first stab at a site-icon: Rename a 512x512 PNG to "favicon.ico" and copy it to the root of the site. This works in some browsers - but not all. According to the literature, all kinds of sizes are needed. I'll get back to that.
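The more robust approach - a sketch; the file names here are examples, not files that exist on the site - is to declare the icon explicitly in the head section, in a couple of sizes:

```html
<!-- Explicit icon declarations; file names are examples -->
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="192x192" href="/favicon-192x192.png">
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
```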
2025-08-2 - Git, more Graphics and real content with new fonts
I started with some house-cleaning - creating a GitHub repo that now holds my site. Now that I am working with text-based files instead of e.g., MySQL or MariaDB, I can harvest the benefits of git. Here's a nice Cheat Sheet for git. Using GitHub created a problem, as I now have two accounts at GitHub - one at work and one for this site. Git itself is designed for this, but authentication is a nuisance - see SSH and Git for a solution.
The next thing on my todo-list was one of the big ones - getting better performance with graphics. Looking at my WordPress installation it is clear that it has never really worked optimally.
When I write my books I always try to make diagrams etc vector-based - using e.g., Visio or Corel Draw. In the Latex books, these diagrams are included as PDF and thus stay vectorized. I don't know why, but I haven't done a similar effort in my website. It dawned on me that I could re-export the original drawings as SVG - which all browsers support. This means going back to Visio, CorelDraw and possibly Python's MatPlotLib and re-export graphics allowing me to only reference one small image-file in HTML.
To test this I decided to jump in and recreate one of the more important pages in the WordPress website - my page on the Microcontroller book (MCU book in the menu). Sure enough, Visio could export to SVG. However, if bitmap pictures are embedded, the SVG explodes in size. This is the case for the book front-cover - whereas the back-cover has an MCU diagram, which works nicely. There was also one drawing where I had used some special techniques to make it look like 3D - here I needed to stay with PNG - or redo the figure a bit. Later I exported older CorelDraw figures to SVG, and this worked beautifully. The only issue is to export text not as letters but as curves. This takes more space, but is also more robust and just works - and still the files are much smaller than before. The best thing is that a user can zoom as much as he or she pleases - it still looks great.
Back to the content. In WordPress, pages cannot be found as HTML files - they are kept in the database. Instead I simply copy-pasted the text from the "code" window in WP. And then I replaced figures with SVG-versions when possible.
Line-breaks/carriage-returns were a challenge. In the old WordPress editor I can see and write HTML (aka "code") - except for one thing. Line-breaks in the WP editor look like in any other editor and also become line-breaks when the pages run on the website. However, in order to make the source work on my new website I needed to insert paragraph tags. I did consider taking my HTML from "View Source" in the browser. This gives the right paragraphs, but also a lot of hex-codes, and the structure is messed up... so no. Note that later I introduced MarkDown as an intermediate edit step.
Just before I started this site, I introduced Roboto fonts on the WP-site. This was now copied here via CSS - Roboto for normal text and Roboto Slab for headers. Later I gave up on this, to get a faster loading time. Note that in CSS you can still have Roboto as a preferred font - without forcing every user to download it. Often it is already on the user's PC - and if not, you can have an alternative font on the list.
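Such a preference list is simply a font stack in CSS - the browser uses the first font it finds locally, with a generic family as the guaranteed fallback (the exact alternatives here are an example):

```css
/* Preferred fonts first; the generic family is the guaranteed fallback */
body { font-family: Roboto, "Segoe UI", Arial, sans-serif; }
h1, h2, h3 { font-family: "Roboto Slab", Georgia, serif; }
```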
2025-08-5 - Menu, Icons and more content
I now decided to subscribe to the free "fontawesome" icons and put them to use in the menu. This was probably the hardest place to use them. It was a huge fight to persuade CSS to let the inner lists in the menus have a smaller font than the outer list. CSS allows you to select the outer elements without the inner - but then the inner elements inherit some styling - including font size - from the outer. Sigh. Please note that I later found a better icon-solution based on Bootstrap.
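One way out - a sketch with a hypothetical class name, not necessarily what this site uses - is to give the inner lists an explicit size, so the rule overrides inheritance instead of fighting it:

```css
/* Outer menu items */
.menu > li { font-size: 1rem; }
/* Inner (nested) menu items: an explicit rule beats inherited size.
   rem instead of em avoids compounding shrinkage at deeper levels. */
.menu li li { font-size: 0.85rem; }
```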
I was a bit tired of my old WP menu, which had headers that sometimes were links - and sometimes not. All items in the menu will now be links. There are, however, still two levels in the menu. The menu is improving, but will need to become responsive.
Working hard, I now have all three books relatively in place - including sub-pages - and I am getting faster. The IoT book main page now includes the table of contents with figures that used to be a page of its own. This is more like the two other books.
Performance is still great - even though I use higher resolution in some images. It helps that others are now in SVG.
Adding background to the header was a breeze. Likewise with the Elk footer image.
As the figure above shows, things are progressing. On a wide screen - like the screendump - the centered text is too wide. Also, the left menu could be wider. On a phone, the menu should have a bigger font, and probably come on top of the content.
2025-08-09 Responsiveness and SEO
- Added more content-pages
- More CSS - like code-tags, table-headers
- Inserted viewport statement inside head-tags - this is the real beginning of responsiveness
- Using "@media" to handle smaller screens differently - e.g. having three vertical blocks instead of horizontal
- Removed fixed sizes for figures - helped a lot on the small screens. Ironic, as it was a lot of work to put in
- Wrapped tables to allow wide tables to scroll, and to ensure a generic color-scheme
- Moved scripts for icons, fonts etc. from head-area to commonly included head.php and also added title and metadata as page-variables used by head.php. Checked with Chrome Inspect.
- Inserted Google Analytics tag in the new head.php - commented out, as it targets the com-site and this is still dk.
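The two responsiveness steps above - the viewport statement and an "@media" rule - can be sketched like this (the breakpoint and the class name are examples):

```html
<!-- In the head section: make mobile browsers report their real width -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  /* Example breakpoint: stack the three blocks vertically on small screens */
  @media screen and (max-width: 600px) {
    .blocks { flex-direction: column; }
  }
</style>
```

Without the viewport statement, mobile browsers pretend to be desktop-wide and the @media rules never kick in.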
2025-08-13 Line Endings
A lot of content was added. Sometimes when I had made small changes and prepared to commit to git, there appeared to be many changes that I had not created on purpose. These were marked with red "^M" at the end of lines in the diff. This was obviously line-endings toggling between Linux-style - LF - and Windows style - CR-LF. The web-server did not seem to care, but the right thing when using a Linux web-server is to stick to Linux line-endings.
Setting up VS-Code
In VS-Code, at the right side of the status line at the bottom, there should be a small "LF" and this was now "CRLF", after I had worked on other stuff. This was fixed with a workspace-file in the root folder of the project with:
{
  "folders": [ { "path": "." } ],
  "settings": {
    "files.eol": "\n"
  }
}
Setting up git
I needed to tell git to make sure that when stuff goes into git, it should "normalize" line-endings. In my git bash shell - "standing" in the root folder - I wrote:
git config core.autocrlf input
In .gitattributes:
* text=auto eol=lf
*.svg -text
*.png -text
*.jpg -text
Note especially how I tell git that SVG-files are NOT to be treated as text (the "-"). In other words, git is NOT allowed to change line-endings in these types of files. It is probably not needed for png and jpg files, but I learned the hard way that if I do NOT include "*.svg -text", my svg-files become unusable. Now I needed - in bash - to first commit the updated .gitattributes - then "renormalize" the git repo based on its contents.
git add .gitattributes
git commit -m "LF style"
git add --renormalize .
git commit -m "Normalize line endings to LF"
Finally there were still files in my working tree that had CR-LF endings. This was fixed with the following bash command:
git rm --cached -r .   # empty the index; working files stay on disk
git reset --hard       # restore from HEAD, rewriting line-endings on checkout
It is important to use the git-supplied commands for all these actions - they are relatively safe. Before I did the above, I tried a recursive "sed" command in a late evening session. The moment I had started it, I realized it would mess up my binary files - basically all images. "Well, they can be restored from git", I thought. But it also corrupted the entire git-repo in my .git folder! Long live git's distributed repos - GitHub was intact.
I tested generating a sitemap - using a free tool from Screaming Frog. I also created a basic "robots.txt" file.
2025-08-16 Going live
Having everything prepared to finally replace the WordPress site, this was my "todo" list:
- Remove "test site" from header.
- Put content in index.php.
- Change footer from "dk" to "com".
- In .htaccess: uncomment prepared "301 redirects". Visitors will silently be redirected from old WP "slug" to new pages. Supposedly this concept is also understood by search engines.
- Uncomment Google gtag in head.php (using old existing Google ID).
- Backup .com WordPress site at provider.
- For a while - leave WP-specific folders on .com-site and copy the root folder to a backup folder.
- Upload to com-site.
- Fix any bugs (these were in .htaccess)
- Generate and upload new sitemap.xml - also to search.google.com. A bit of editing was needed here.
- Change DNS to redirect dk to com again.
- Push fixed code to git.
- In a day or two - verify in Google Analytics that the tag is working. Maybe add some new events...
- Start optimizing again.
2025-08-31 Search
I previously skipped some of my posts on "softer" subjects like Agile, Lean, DevOps and teamwork. Some of these are now pages.
When moving away from WordPress, I knew that I would miss the ability to allow visitors to search in my website. Under the WP-regime, I had used "Ivory Search", which did a good job. It could benefit from the fact that under WP all pages and posts are stored in a database. However, I have selected NOT to have a database here.
My new search-facility had to be simple and fast to use. All pages already had a "title" and a meta "description". These are used by a common "head.php" to pass on meta-data to relevant places. I thought that if I used these, and added a meta "keywords" in the same style, these three items could go into a json-file that could be used for searches. This was the basic principle. Not as good as a full free-text search - but fast and hopefully enough.
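An entry in such a search index could look like this (a sketch with hypothetical example values - not the site's actual index format):

```json
[
  {
    "url": "/pages/microcontrollers.php",
    "title": "Microcontrollers with C",
    "description": "About my book on microcontrollers",
    "keywords": "MCU, embedded, C"
  }
]
```

The search then only has to match the user's words against these three strings per page.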
At first I built the json-file off-line with Python - extracting the text strings from the raw files. This was unfortunately inhibited by the fact that the title and description are processed in PHP before being served in the right format. Thus Python did not get the right text. So I decided to work on the served files instead. Python could do this, but moving the algorithm to a PHP-page meant that I did not depend on external tools - only my website. So it came down to one PHP-file that builds the json - on my manual request at rare occasions - and another PHP-file that does the search.
With the above approach, I now needed a list of files to go through. Here the already existing sitemap.xml was an obvious choice. I got some strange surprises, as the sitemap naturally contains the URLs of the live site, while I was debugging on a local XAMPP site. This meant that I was trying to build the json-file from pages that did not yet contain the keywords, because I hadn't uploaded these to the live site yet. As soon as this was discovered, it was not a problem; the live site could easily contain the pages with descriptions and keywords, without my unfinished PHP and search UI. So now I was debugging PHP on my local site - parsing files from the live site, and showing search-results locally - with links always pointing to the live site. A bit confusing, but it works. Here I also uncovered some massive bugs in the sitemap. Apparently Screaming Frog had concatenated consecutive runs into one file. That explained (to a degree) why Google was complaining about non-canonical pages.
UI-wise I ended up with a small magnifier icon and a text-box on top of the left menu. Once the user presses Return, they are taken to a new page with the results as links. Here users may try new keywords. The font needs to be bigger here - that will soon be done.
I ran into a small problem with the json file containing text prepared for HTML - with the usual escapes. This would not match user searches, and it also meant that when the hits were presented "HTML-style" they were double-escaped. This was easily fixed by using "html_entity_decode" on strings going into the json-file.
Finally, I needed to build some security into the builder before it was allowed to run on the live site. In my .htaccess file I now set an environment variable with a password. The PHP-based index-builder can only be activated when this password is passed to the page - appending "?key=password" (obviously with a real password). This means that I don't need my XAMPP site to rebuild the index.
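A sketch of the idea - the variable name and password are placeholders, and the PHP side uses a constant-time comparison as a general precaution:

```apache
# .htaccess: expose the password to PHP as an environment variable
SetEnv INDEX_BUILD_KEY "placeholder-password"
```

```php
<?php
// Top of the index-builder page: refuse to run without the key
$key = $_GET['key'] ?? '';
if (!hash_equals((string) getenv('INDEX_BUILD_KEY'), $key)) {
    http_response_code(403);
    exit('Forbidden');
}
// ... build the json index here ...
```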
Unfortunately, I then spent a lot of time chasing bugs in the index-build - only to realize that old pages were being served to the PHP-builder. This is because my provider uses a Varnish cache, and the builder fetches the real served pages via the sitemap, instead of the raw files (which were problematic in other ways, as described above). Thus I had to disable the cache while building the index.
2025-09-04 Change of Icons
At first I ignored it, but it started to annoy me: Pages seemed to jump sideways when they were rendered by a browser. I suspected that the "Font Awesome" icons in the sidebar-menu were drawn by JavaScript - after the page had been rendered once and after the JavaScript had fetched the icons. This was correct.
I did consider various forms of prefetch etc., but felt that I might struggle with this for long - and then a new browser generation would make it all start over. So I looked around and stumbled upon Bootstrap - which is open source.
Bootstrap supports various ways to load their vectorized icons, and I was attracted to "inlining", where the SVG source for the icons is pasted directly into the html/php. This means fewer files to manage when developing and fewer for the browser to fetch at runtime - especially because all my menu-handling is already in one file. More importantly, it is all HTML, and there is no JavaScript running sooner or later than the rest.
The switch was fast and there is no flicker or jumping now. Yeeeha. As a bonus, the SVG icons even work on my ancient iPad's Safari browser, which the Font Awesome ones didn't.
CSS is still "a kind of magic", to quote Queen. As I narrowed a browser window, the columns in my flex display did not behave as I wanted them to. I think my main take-away is to remember that "max-width: 100%" is an important setting - ensuring that the content of an element is not allowed to grow beyond what the outer container has designated. That sounds like it should be standard behaviour, but I guess this is where the "flex" comes in.
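A sketch of the two settings that tame overflowing flex columns (class names hypothetical). On top of "max-width: 100%", flex items by default refuse to shrink below their content's intrinsic size - "min-width: 0" removes that floor:

```css
/* Allow a flex column to shrink below its content's natural width */
.column { min-width: 0; }
/* Keep content (e.g. images) inside the column's designated width */
.column img { max-width: 100%; height: auto; }
```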
2025-09-06 Moving to WebP
It was clear from day 1 that the site would need to be able to deliver modern image-formats - mainly WebP. I decided that now was the time to get started. First I needed to do the actual conversion from png and jpg/jpeg. Here I used an old friend - ImageMagick. Windows is not so cool as Linux, when it comes to doing stuff recursively in subfolders, so in each of the few directories with images I did the following:
magick mogrify -format webp *.png
magick mogrify -format webp *.jpg
magick mogrify -format webp *.jpeg
There are also online tools that can do the above - but with many files at once they are more cumbersome.
Next thing was to persuade my webserver to serve the new WebP-files instead of the old filetypes - when supported by the given browser. This can be done by going into all pages and using the "picture" and "srcset" tags to give the browser the main WebP as well as the fallback png/jpg-file. It is, however, also possible to instead tell the webserver to check whether the browser claims support for "image/webp" in the Accept header of its http request. If that is the case, the server can simply substitute the request for e.g. a png in the page with a corresponding request for WebP. The following is my .htaccess code for serving e.g., "sample.webp" when asked for "sample.png".
Options -MultiViews
# The following goes inside <IfModule mod_rewrite.c>
# Did the client claim support for webp in the http Accept header?
# NB: [NC] = not case-sensitive
RewriteCond %{HTTP_ACCEPT} image/webp [NC]
# Did the client request jpeg or jpg or png?
RewriteCond %{REQUEST_FILENAME} ^(.+)\.(jpe?g|png)$ [NC]
# Do we have the corresponding webp file in stock?
RewriteCond %1.webp -f
# THEN serve webp instead
RewriteRule ^(.+)\.(jpe?g|png)$ $1.webp [T=image/webp,L,E=accept:1,NC]
# The following goes inside <IfModule mod_headers.c>
# Vary: Accept for all the requests for jpeg, jpg and png
Header append Vary Accept env=REDIRECT_accept
# The following goes inside <IfModule mod_mime.c>
AddType image/webp .webp
The above process can be seen in e.g., Chrome's DevTools in the Network tab. If the server sees that the browser is capable of handling WebP, it can deliver this when asked for a png/jpg. The figure below shows exactly how the page - sqlserver.php - contains references to png or jpg files. It also shows that this is what the browser then asks for - but it ends up with a WebP-file.
Note how in the left column, a png- or jpg-file is requested - but the "type" column states "webp". You can single-click on the relevant line, and then select "Response" in the window on the right, and you will see a hex-view (or UTF-8) with the letters "WEBP" in the start of the file. In other words; it may look as if the browser asks for a png-file and receives it - but it does receive a WebP-file. A simple way to see what is actually served is to - in the normal browser view - right-click on an image and select "Save as...". If all is well, you will see that you are attempting to save a WebP-file.
Obviously things did not go as smoothly as described above. In the ImageMagick line I had used "WebP" instead of "webp", which meant that the extension became "WebP". This worked fine on my XAMPP-server on my Windows PC. It did, however, not work on the live site, because the script in .htaccess looks for "webp" - and the live server runs Linux, which, contrary to Windows, DOES care about case. I decided to rename the files, but as they were now in git, I needed "git mv" together with git bash to work with wildcards. I had no luck with this. So I decided to - for each folder involved - use the old DOS command in a simple command-shell:
ren *.WebP *.webp
Interesting that this old DOS-command can support N:N sources and destinations in a rename. The plan was to let git recognize the change and then git add and commit and be done. However, on Windows git does not see a change in case as significant. So now I had to run the command below:
git config core.ignorecase false
Now git saw the change, and I could commit, and then immediately after set the ignorecase back to true. And - not to forget - upload the changes to the webserver. So this was my second experience in this project with the textual clashes between Linux, Windows and git. Sigh.
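For the record, a bash loop could have done the same rename without DOS - a sketch, demonstrated here on throwaway files in a temporary directory. It goes through a temporary name, because a case-insensitive filesystem (like Windows') treats "x.WebP" and "x.webp" as the same file:

```shell
# Rename *.WebP to *.webp via a temporary name (safe on case-insensitive filesystems)
dir=$(mktemp -d)                    # throwaway demo directory
touch "$dir/cover.WebP" "$dir/diagram.WebP"
for f in "$dir"/*.WebP; do
  tmp="$f.tmp"
  mv -- "$f" "$tmp"                 # step 1: move away from the old name
  mv -- "$tmp" "${f%.WebP}.webp"    # step 2: move to the lowercase name
done
ls "$dir"
```

Inside a repository, the two mv commands could be git mv instead, so the rename is staged directly.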
2025-09-13 More html and webP
I wanted to show my books off already on the index-page. For this I needed the three covers to be nicely aligned when there is space - and above each other on e.g., phones. This time I created a style locally on the page - it can always be moved to the central CSS if used more. I got it to work - but then it messed up my footer image. So I needed to ensure that the new local style was only used in the center part of the page. This required the introduction of a new class - I called it "custom-page". At the bottom of the head section - below the include of my stylesheet - there now was a lot of styles like:
.custom-page .row {
display: flex;
flex-wrap: nowrap;
flex-direction: row;
gap: 16px;
align-items: stretch; /* equal height columns */
}
...
.custom-page figcaption{
padding: 12px;
text-align: center;
background: initial;
}
...
@media screen and (max-width: 600px){
.custom-page .row { flex-direction: column; }
.custom-page figure { height: auto; }
.custom-page .image-wrap { height: auto; }
.custom-page img { height: 100%; }
}
The media tag is what makes it work on a slim device. Now, inside the html, I used the "custom-page" class for a div surrounding the relevant parts of the page:
<div class="custom-page">
<div class="row">
<div class="column">
<figure>
<div class="image-wrap">
<a href="/pages/microcontrollers.php">
<img src="/media/microcontrollers/Cover14_FrontLoRes.png" alt="MCU Book">
</a>
</div>
<figcaption>Microcontrollers with C</figcaption>
</figure>
</div>
.....
With that in place I changed and re-tested until I liked it. Looking at the performance in DevTools - F12 in Chrome - I noticed that some images were served as png or jpeg - not webp as expected. This had been implemented not so long ago, and now seemed to only work for some - seemingly random - files. On the Network page in DevTools I had checked "Disable Cache", the webp-files did exist at the server, names and casing matching, and .htaccess did its rewrites - at least on some files. And the problematic files did not differ from the others in use of e.g., underscores - something that might interfere with the regular expressions in .htaccess. Nevertheless, on refresh - or CTRL-F5 - I was served non-WebP files.
Long story short - when I pressed CTRL-SHIFT-F5 it worked. Until then, the browser was not caching new files, but was still serving previously cached files, from before I started the debug-session. And it was surprisingly difficult to see in DevTools that no files were retrieved at all.
The problem is that when RewriteRule is used in .htaccess, it is almost hidden what happens. When using the Network tab in DevTools it is important to understand the following:
- As stated earlier, the "Type" field states "webp" when WebP is fetched and e.g. "png" when this is fetched. This can be verified by looking at the "Response" sub-tab, where you can see the content of the file.
- The "Status" field says "200" when a file is served from the server to the browser. The number 200 comes from the http response from the server. You may, however, also see something like "200 (from memory cache)" or "200 (from disk cache)" in older versions of DevTools. This does not originate from the server's response. This is DevTools telling you that the file was cached in memory (from the same session) or on disk (from a previous session). Thus, the file is still "served" - but directly from the browser's cache. In this case an old png-file may be served instead of a newer webp-file (but the Type field is correct).
- Probably because it was problematic to fake a "200" response in any way, newer versions of DevTools use the "Size" field to tell you whether a file was cached. This makes a lot of sense, as no content bytes are sent from the server in that case, and thus the size becomes less important.
2025-09-17
Added more content. The main addition was the HTTP page. Here I play more with DevTools while investigating http. This includes toying with AJAX and XMLHttpRequest.
Google still complained about non-canonical pages. This was solved by adding a line setting "canonical_url" with a script in an include file. The root-problem is that "www.mysite.com/mypage.php/" targets the same page as the versions without the "www", and with/without the terminating "/" - on top of the duality between https and http. I canonicalized the secure short versions - like "https://mysite.com/mypage.php".
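The result in the served head section is a single link tag per page, telling search engines which of the URL variants is the real one (the URL follows the placeholder used above):

```html
<link rel="canonical" href="https://mysite.com/mypage.php">
```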
2025-10-22
Again more content was added - now on AI in Embedded and a tools page on SSH and Git.
Although I still enjoy being completely in control with my new site - as well as its speed - I do sometimes miss WordPress when I write content. Writing native html sometimes inhibits my speed of writing. For this reason I have tried using MarkDown. It is just so much faster writing headers, bulleted lists, italic/bold and especially paragraphs.
Not that I want to store my pages in MarkDown and generate HTML dynamically - which would be possible. No - for now I just write new pages - or large blocks of content - initially in MarkDown, then convert and finalize in html as usual. The first attempt was the new page on SSH and Git. The converter is "Markdown Preview Enhanced" - an extension for VS Code. It even allows you to see the result while writing the MarkDown.
It went quite OK, and I will try to do this for some time.