<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Remembering things by writing them down</title><link>https://blog.jpeach.org/</link><description>Recent content on Remembering things by writing them down</description><generator>Hugo -- gohugo.io</generator><language>en-US</language><copyright>Copyright &amp;copy; James Peach 2015-2023</copyright><lastBuildDate>Thu, 14 Dec 2023 15:23:36 +1100</lastBuildDate><atom:link href="https://blog.jpeach.org/index.xml" rel="self" type="application/rss+xml"/><item><title>Git LFS With Homebrew</title><link>https://blog.jpeach.org/posts/2023/12/git-lfs-with-homebrew/</link><pubDate>Thu, 14 Dec 2023 15:23:36 +1100</pubDate><guid>https://blog.jpeach.org/posts/2023/12/git-lfs-with-homebrew/</guid><description>So, I&amp;rsquo;ve been working on an internal Homebrew tap, and have been trying to get the automatic bottle packaging actions working. There&amp;rsquo;s a good blog post that describes the outcomes, but the Homebrew project supports internal taps on a best-effort basis, so that post doesn&amp;rsquo;t dig into all the issues you might encounter when setting it up internally.
By far the most time-consuming issue I had was getting the git clone to find git-lfs.</description></item><item><title>Installing FreeBSD network drivers</title><link>https://blog.jpeach.org/posts/2023/11/installing-freebsd-network-drivers/</link><pubDate>Sat, 04 Nov 2023 12:47:25 +1100</pubDate><guid>https://blog.jpeach.org/posts/2023/11/installing-freebsd-network-drivers/</guid><description>So I decided to build a local NAS system to store music (since I&amp;rsquo;m worried my CDs are starting to degrade), and to do Time Machine backups (last backup 2018!). Despite never having used FreeBSD before, I am planning to go with it for this since I want to use a native, well-integrated ZFS implementation.
Anyway, the first issue I&amp;rsquo;ve hit is that I need Realtek network drivers, but they are not included in the installer kernel.</description></item><item><title>Recovering EC2 Instances With User Data</title><link>https://blog.jpeach.org/posts/2023/09/recovering-ec2-instances-with-user-data/</link><pubDate>Sun, 03 Sep 2023 14:04:53 +1000</pubDate><guid>https://blog.jpeach.org/posts/2023/09/recovering-ec2-instances-with-user-data/</guid><description>So, every now and then when I do something in AWS, I mess it up. One problem that I occasionally give myself is breaking SSH access to an EC2 instance. In that case, you need some sort of out-of-band access to go and fix the SSH configuration.
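User data turned out to be the out-of-band channel I needed. As a minimal sketch of the idea (the backup path is an assumption, and note that you can only edit user data while the instance is stopped):

#!/bin/bash
# Hypothetical recovery script, set as the instance user data: restore a
# known-good sshd configuration and restart the daemon.
cp /etc/ssh/sshd_config.bak /etc/ssh/sshd_config
systemctl restart sshd

There is a catch about when this script actually runs, though.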
EC2 instance user data is generally set to be a shell script that cloud-init runs once when the instance is created.</description></item><item><title>Exporting Docker images from Dagger</title><link>https://blog.jpeach.org/posts/2023/08/exporting-docker-images-from-dagger/</link><pubDate>Fri, 11 Aug 2023 08:50:45 +1000</pubDate><guid>https://blog.jpeach.org/posts/2023/08/exporting-docker-images-from-dagger/</guid><description>I started using Dagger this week, and if you have any sort of build and test system based on shell scripts and Dockerfiles, Dagger will be a big improvement.
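To spoil the ending: once Dagger has exported the image to a tarball, the Docker side is just a load and a tag. A sketch with made-up file, ID, and tag names (docker load prints the ID of the image it ingested):

$ docker load -i ./image.tar
$ docker tag sha256:0123abcd my-great-image:latest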
This post documents how to export a container image that you build in Dagger to your local Docker instance. This process is described in the Dagger documentation, but I needed to go one step further and tag the image that I exported.</description></item><item><title>Using a Specific SSH Identity with Git</title><link>https://blog.jpeach.org/posts/2023/07/using-a-specifc-ssh-identity-with-git/</link><pubDate>Thu, 06 Jul 2023 12:35:31 +1000</pubDate><guid>https://blog.jpeach.org/posts/2023/07/using-a-specifc-ssh-identity-with-git/</guid><description>It&amp;rsquo;s common to configure multiple SSH keys for Git access, but there are situations where you need to use a specific key. In this case, you don&amp;rsquo;t want to let ssh just choose the first working key; you want it to use a specific SSH identity. My use case was that, for a certain set of source repositories, I wanted to use a particular key that had been approved by a particular GitHub organization.</description></item><item><title>Building Dreamcast programs With GCC spec files</title><link>https://blog.jpeach.org/posts/2023/05/building-dreamcast-programs-with-gcc-spec-files/</link><pubDate>Thu, 25 May 2023 13:53:13 +1000</pubDate><guid>https://blog.jpeach.org/posts/2023/05/building-dreamcast-programs-with-gcc-spec-files/</guid><description>In the KallistiOS ecosystem, there are basically two ways to build Dreamcast programs - use the CMake toolchain support, or use the compiler wrapper scripts from the KallistiOS source tree. I was interested in the latter, but they end up depending on a large number of environment variables, which are traditionally sourced through $KOS_BASE/environ.sh. Although it&amp;rsquo;s not really a big issue to have all those environment variables, I wondered whether there was a cleaner approach.</description></item><item><title>Homebrew tricks for the Dreamcast toolchain</title><link>https://blog.jpeach.org/posts/2023/05/homebrew-tricks-for-the-dreamcast-toolchain/</link><pubDate>Sun, 21 May 2023 11:08:46 +1000</pubDate><guid>https://blog.jpeach.org/posts/2023/05/homebrew-tricks-for-the-dreamcast-toolchain/</guid><description>I&amp;rsquo;ve spent some time working on a Homebrew tap for Dreamcast development tooling, and wanted to write a little about the tricks I used in creating formulae to install the Dreamcast compilation toolchain.
First, the Dreamcast toolchain is actually built and installed by the dc-chain package from the KallistiOS repository. dc-chain has three phases: download, unpack, and build. The download phase is done by the download.sh script, which downloads the source archives of the toolchain components (gcc, binutils, newlib, gdb) for the SH4 and ARM architectures.</description></item><item><title>Dreamcast development setup</title><link>https://blog.jpeach.org/posts/2023/05/dreamcast-development-setup/</link><pubDate>Wed, 17 May 2023 09:29:23 +1000</pubDate><guid>https://blog.jpeach.org/posts/2023/05/dreamcast-development-setup/</guid><description>Way back in the last century, I bought a Sega Dreamcast. One of the reasons that I liked it (apart from some great games) was that there was a burgeoning homebrew development scene for it. I went and bought the various bits of hardware I thought I needed to get going, but ended up never doing anything, and it all sat in storage for 20 years.
Recently, I unpacked everything and decided to figure it all out.</description></item><item><title>Remember to set CONFIG_CFS_BANDWIDTH</title><link>https://blog.jpeach.org/posts/2019/10/remember-to-set-config_cfs_bandwidth/</link><pubDate>Tue, 29 Oct 2019 00:00:00 +1000</pubDate><guid>https://blog.jpeach.org/posts/2019/10/remember-to-set-config_cfs_bandwidth/</guid><description>I spent a while trying to debug a runc problem where it would always get an EACCES error writing the cpu.cfs_period_us file in a cpu cgroup.
The problem turned out to be that I had not enabled CONFIG_CFS_BANDWIDTH in my kernel build. Presumably, when runc tries to write the file, it passes O_CREAT and cgroupfs doesn’t let it create a new file, which leads to the somewhat surprising error.
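A quick way to check whether the option is enabled (which of these works depends on where your distro exposes the kernel config):

$ grep CONFIG_CFS_BANDWIDTH /boot/config-$(uname -r)
$ zgrep CONFIG_CFS_BANDWIDTH /proc/config.gz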
So, if you get this error, just turn on CONFIG_CFS_BANDWIDTH :)</description></item><item><title>Being too clever merging protobufs</title><link>https://blog.jpeach.org/posts/2019/10/being-too-clever-merging-protobufs/</link><pubDate>Wed, 23 Oct 2019 00:00:00 +1000</pubDate><guid>https://blog.jpeach.org/posts/2019/10/being-too-clever-merging-protobufs/</guid><description>This is probably something lots of other people have tried and burnt themselves with, but anyway, this time it’s my turn.
The goal is, given an arbitrary protobuf, can we write an API that applies default values to it? Normally we would create a prototype protobuf object with the defaults and merge our current object into it, updating the prototype object with current values. However, in Go, this would erase the type information and mean we would have to do some ugly casting (it’s easy to avoid this in C++ by using templates).</description></item><item><title>Buildroot, Systemd, and Getty</title><link>https://blog.jpeach.org/posts/2019/09/buildroot-systemd-and-getty/</link><pubDate>Wed, 18 Sep 2019 16:38:13 +1000</pubDate><guid>https://blog.jpeach.org/posts/2019/09/buildroot-systemd-and-getty/</guid><description>I just spent a few hours trying to get systemd to spawn a getty on the vt console for a Linux image that I was building with Buildroot. It turns out that it helps to read the right documentation, which in this case was a blog about systemd console handling.
The money quote for me was:
In systemd, two template units are responsible for bringing up a login prompt on text consoles:</description></item><item><title>Setting the HTTP User Agent in Go</title><link>https://blog.jpeach.org/posts/2019/09/setting-the-http-user-agent-in-go/</link><pubDate>Tue, 10 Sep 2019 07:04:50 +1000</pubDate><guid>https://blog.jpeach.org/posts/2019/09/setting-the-http-user-agent-in-go/</guid><description>Here’s the smallest amount of code I could come up with to set the user agent when making a HTTP request in Go:
type UserAgent string

func (u UserAgent) RoundTrip(r *http.Request) (*http.Response, error) {
    r.Header.Set(&amp;#34;User-Agent&amp;#34;, string(u))
    return http.DefaultTransport.RoundTrip(r)
}

http.DefaultClient.Transport = UserAgent(&amp;#34;my-great-program&amp;#34;)

Note that this isn&amp;rsquo;t really legal, since the RoundTripper is not supposed to modify the request.</description></item><item><title>Ccache and Bazel</title><link>https://blog.jpeach.org/posts/2019/09/ccache-and-bazel/</link><pubDate>Thu, 05 Sep 2019 19:18:22 +1000</pubDate><guid>https://blog.jpeach.org/posts/2019/09/ccache-and-bazel/</guid><description>Bazel defaults to building code in a sandbox that remounts most of the filesystem read-only. This means that if you are using ccache (Fedora, for example, will enable it by creating appropriate symlinks when you install the package) the compile job will fail because it can&amp;rsquo;t write to the cache directory.
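The straightforward fix to this is to create a bazelrc file which specifies the Bazel sandbox_writable_path flag to make the cache directory writable. A minimal sketch (the cache path here is an assumption; match it to your ccache configuration):

# ~/.bazelrc: allow sandboxed compile actions to write to the ccache directory.
build --sandbox_writable_path=/home/me/.ccache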
</description></item><item><title>Using alternatives(8) to enable lld</title><link>https://blog.jpeach.org/posts/2019/08/using-alternatives8-to-enable-lld/</link><pubDate>Tue, 27 Aug 2019 19:20:01 +1000</pubDate><guid>https://blog.jpeach.org/posts/2019/08/using-alternatives8-to-enable-lld/</guid><description>In this post, I remember how to use the alternatives(8) mechanism to make clang&amp;rsquo;s lld linker the default.
First, tell alternatives that lld is available and set it at a high priority:
$ sudo alternatives --install /usr/bin/ld ld /usr/bin/lld 80
$ sudo alternatives --auto ld

Then, just verify that it worked:
$ alternatives --display ld
ld - status is auto.
 link currently points to /usr/bin/lld
/usr/bin/ld.bfd - priority 50
/usr/bin/ld.gold - priority 30
/usr/bin/lld - priority 80
Current `best&amp;#39; version is /usr/bin/lld.</description></item><item><title>Mesos config for cquery</title><link>https://blog.jpeach.org/posts/2018/05/mesos-config-for-cquery/</link><pubDate>Sun, 27 May 2018 19:21:58 +1000</pubDate><guid>https://blog.jpeach.org/posts/2018/05/mesos-config-for-cquery/</guid><description>The canonical way to feed your project to cquery is to generate a compile_commands.json file, possibly using cmake. Every time I switch my Mesos work to the cmake build, I live to regret it, either because the component I&amp;rsquo;m working on isn&amp;rsquo;t implemented in the cmake build or I end up wanting to install the build (and that isn&amp;rsquo;t implemented in cmake).
So here&amp;rsquo;s a .cquery file that makes cquery work pretty well with Mesos:</description></item><item><title>Using cquery from vim with cscope-lsp</title><link>https://blog.jpeach.org/posts/2018/05/using-cquery-from-vim-with-cscope-lsp/</link><pubDate>Wed, 02 May 2018 19:24:42 +1000</pubDate><guid>https://blog.jpeach.org/posts/2018/05/using-cquery-from-vim-with-cscope-lsp/</guid><description>A while ago I tried cquery with Vim, but was a bit unsatisfied with the integration. I could probably have altered my key bindings to perform LSP searches instead of cscope searches, but I also really quite like the integrated tags stack you get with the built-in Vim cscope support.
So I thought that it couldn&amp;rsquo;t be too hard to implement the cscope line protocol with a cquery backend, and it turns out that it wasn&amp;rsquo;t.</description></item><item><title>Debugging libstdc++ strings</title><link>https://blog.jpeach.org/posts/2017/12/debugging-libstdc-strings/</link><pubDate>Fri, 15 Dec 2017 19:27:21 +1000</pubDate><guid>https://blog.jpeach.org/posts/2017/12/debugging-libstdc-strings/</guid><description>Writing this down quickly before I forget.
When debugging a std::string from GNU libstdc++, the debugger typically won&amp;rsquo;t show you the actual representation.
First, you need to turn off the pretty printer (assuming that it worked in the first place):
(gdb) p reregisterSlaveMessage.resource_version_uuid_.ptr_
$13 = (std::string *) 0x7f4d98d5e970
(gdb) p *reregisterSlaveMessage.resource_version_uuid_.ptr_
$14 = &amp;#34;\022\020K|\n\225\064\246CE\222\350\275\315t&amp;#34;, &amp;lt;incomplete sequence&amp;gt;
(gdb) disable pretty-printer
2 printers disabled
0 of 2 printers enabled
(gdb) p *reregisterSlaveMessage.resource_version_uuid_.ptr_
$15 = {
  static npos = &amp;lt;optimized out&amp;gt;,
  _M_dataplus = {
    &amp;lt;std::allocator&amp;lt;char&amp;gt;&amp;gt; = {
      &amp;lt;__gnu_cxx::new_allocator&amp;lt;char&amp;gt;&amp;gt; = {&amp;lt;no data fields&amp;gt;}, &amp;lt;no data fields&amp;gt;},
    members of std::basic_string&amp;lt;char, std::char_traits&amp;lt;char&amp;gt;, std::allocator&amp;lt;char&amp;gt; &amp;gt;::_Alloc_hider:
    _M_p = 0x7f4d98d67068 &amp;#34;\022\020K|\n\225\064\246CE\222\350\275\315t&amp;#34;, &amp;lt;incomplete sequence&amp;gt;
  }
}

Next, you need to know that the internal structure of std::string is prepended to the actual string data, so you need to cast and subtract from the data pointer to find the length and refcount.</description></item><item><title>Native Prometheus Support in Mesos</title><link>https://blog.jpeach.org/posts/2017/10/native-prometheus-support-in-mesos/</link><pubDate>Tue, 17 Oct 2017 12:53:13 +1000</pubDate><guid>https://blog.jpeach.org/posts/2017/10/native-prometheus-support-in-mesos/</guid><description>I wrote a design document here retrospectively discussing and justifying my patch series that implements native Mesos support for a Prometheus metrics endpoint.</description></item><item><title>PAM support in the Mesos containerizer</title><link>https://blog.jpeach.org/posts/2017/10/pam-support-in-the-mesos-containerizer/</link><pubDate>Tue, 17 Oct 2017 12:50:53 +1000</pubDate><guid>https://blog.jpeach.org/posts/2017/10/pam-support-in-the-mesos-containerizer/</guid><description>Recently, it occurred to me that running a containerized task is conceptually very similar to having a remote session on an anonymous compute agent. The traditional way for operators to influence (i.e. configure, control, log) the environment of a remote user session is by the use of PAM modules. One of the applications that I had in mind was the use of the pam_loginuid module to set the Linux audit ID so that container audit events can be attributed to the task user rather than to the container orchestrator.</description></item><item><title>Tracing rmdir system calls with SystemTap</title><link>https://blog.jpeach.org/posts/2017/09/tracing-rmdir-system-calls-with-systemtap/</link><pubDate>Mon, 25 Sep 2017 12:54:42 +1000</pubDate><guid>https://blog.jpeach.org/posts/2017/09/tracing-rmdir-system-calls-with-systemtap/</guid><description>I wanted to know who was removing the Mesos memory cgroups hierarchy and why, so I turned to SystemTap. Here&amp;rsquo;s my one-liner:
sudo stap \
  -d /usr/lib/systemd/libsystemd-shared-233.so \
  -d /usr/lib64/libc-2.25.so \
  -d /usr/lib/systemd/systemd \
  -e 'probe kernel.function(&amp;quot;sys_rmdir&amp;quot;) {
        printf(&amp;quot;%s(%s): %s\n&amp;quot;, execname(), pp(), user_string($pathname));
        print_ubacktrace();
      }'

Note that you have to feed in the binaries you expect to see in order to get user stack traces.
The corresponding systemd stack trace was:</description></item><item><title>Metrics data model notes</title><link>https://blog.jpeach.org/posts/2017/07/metrics-data-model-notes/</link><pubDate>Mon, 24 Jul 2017 13:19:38 +1000</pubDate><guid>https://blog.jpeach.org/posts/2017/07/metrics-data-model-notes/</guid><description>Some notes on the data models of various metrics collection systems.
Performance Co-Pilot

Performance Co-Pilot is a metrics collection and visualization system heavily inspired by SNMP. PCP originates in the systems monitoring world.
PCP has a fairly rich vocabulary to describe metrics according to their type, semantics, dimensions and scale.
type is the fundamental data type of the metric, e.g. string, uint32, uint64, double, binary
semantics describe the logical behaviour of a metric and can be counter, instant or discrete.</description></item><item><title>Using Address Sanitizer with TrafficServer</title><link>https://blog.jpeach.org/posts/2016/11/using-address-sanitizer-with-trafficserver/</link><pubDate>Fri, 04 Nov 2016 13:21:54 +1000</pubDate><guid>https://blog.jpeach.org/posts/2016/11/using-address-sanitizer-with-trafficserver/</guid><description>Verifying Traffic Server with AddressSanitizer is fairly straightforward. On Linux, you need a recent gcc or clang and the libasan library. On OS X, libasan wasn&amp;rsquo;t present, so I just switched to Linux ;)
You should give --enable-asan to configure when you build. The build system will enable ASAN on all the parts that should have it. Then, whenever you run, you will get ASAN checking memory state.
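Concretely, the whole flow is something like this (a sketch; detect_leaks is the ASan runtime option that turns on LeakSanitizer, and the traffic_server invocation is only illustrative):

$ ./configure --enable-asan
$ make -j4
$ ASAN_OPTIONS=detect_leaks=1 ./traffic_server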
LeakSanitizer reports leaks from an atexit(3) handler, so you need to ensure that the program exits normally rather than calling _exit(2) or dumping core.</description></item><item><title>Using wrk with proxies</title><link>https://blog.jpeach.org/posts/2016/09/using-wrk-with-proxies/</link><pubDate>Fri, 30 Sep 2016 13:27:30 +1000</pubDate><guid>https://blog.jpeach.org/posts/2016/09/using-wrk-with-proxies/</guid><description>Based on this extremely helpful post, here&amp;rsquo;s a slight extension to make it easier to use wrk with a HTTP proxy.
url = &amp;#39;&amp;#39;
host = &amp;#39;&amp;#39;

init = function(args)
  url = args[1] -- proxy needs absolute URL
  -- Capture the hostname from the target URL.
  _, _, host = string.find(url, &amp;#39;http://([^/]+)/&amp;#39;)
end

request = function()
  return wrk.format(&amp;#34;GET&amp;#34;, url, { Host = host })
end

Usage is like this:
$ wrk -s proxy.</description></item><item><title>Python pip HTTPS proxying</title><link>https://blog.jpeach.org/posts/2016/09/python-pip-https-proxying/</link><pubDate>Sat, 17 Sep 2016 13:29:10 +1000</pubDate><guid>https://blog.jpeach.org/posts/2016/09/python-pip-https-proxying/</guid><description>I investigated an issue where pip fails to establish a TLS tunnel through a HTTP proxy because the proxy responds with 400 (Bad Request).
It turns out that pip sends this CONNECT request:
CONNECT pypi.python.org:443 HTTP/1.0

Now, HTTP/1.1 requires a Host header, so 400 would be the correct response in that case. CONNECT wasn&amp;rsquo;t defined in the original HTTP/1.0 RFC 1945, but Bryan Call pointed me to draft-luotonen-web-proxy-tunneling, so I guess at one point this was a thing.</description></item><item><title>Loading the “rJava” package into RStudio</title><link>https://blog.jpeach.org/posts/2016/09/loading-the-rjava-package-into-rstudio/</link><pubDate>Sun, 11 Sep 2016 13:30:50 +1000</pubDate><guid>https://blog.jpeach.org/posts/2016/09/loading-the-rjava-package-into-rstudio/</guid><description>Poking around the interwebs, everyone seems to get into trouble loading the rJava package into RStudio. It seems like there are enough people trying to make this work that it should just work out of the box, but then again, what do I know?
Here&amp;rsquo;s what worked for me:
jdk &amp;lt;- system2(&amp;#34;/usr/libexec/java_home&amp;#34;, stdout=TRUE)
dyn.load(paste(jdk, &amp;#34;jre/lib/server/libjvm.dylib&amp;#34;, sep=&amp;#34;/&amp;#34;))
library(&amp;#34;rJava&amp;#34;)

This manually figures out where libjvm.dylib is and loads it prior to opening the rJava library.</description></item><item><title>Dealing with relative indices in Lua APIs</title><link>https://blog.jpeach.org/posts/2016/09/dealing-with-relative-indices-in-lua-apis/</link><pubDate>Sun, 04 Sep 2016 13:34:14 +1000</pubDate><guid>https://blog.jpeach.org/posts/2016/09/dealing-with-relative-indices-in-lua-apis/</guid><description>When you use the Lua C API to implement custom Lua bindings, you inevitably end up with internal helper functions that accept a Lua stack index. Lua stack indices can be positive, which indicates an index from the bottom of the stack, or negative, which indicates an index from the top of the stack. It is extremely common to pass -1 to functions to indicate they should operate on the value at the top of the stack.</description></item><item><title>Updating go_resources in Homebrew</title><link>https://blog.jpeach.org/posts/2016/06/updating-go_resources-in-homebrew/</link><pubDate>Mon, 27 Jun 2016 15:13:02 +1000</pubDate><guid>https://blog.jpeach.org/posts/2016/06/updating-go_resources-in-homebrew/</guid><description>This is a quick note to myself about how to update the go_resources in a Homebrew formula.
First, install godep and the Homebrew dev tools:
$ cd $GOPATH
$ go get -u github.com/tools/godep
$ brew tap homebrew/dev-tools

Next, generate a Godeps file in your Go project:
$ cd $GOPATH/src/github.com/me/my-project
$ $GOPATH/bin/godep save .

Now you can get brew to generate the go_resources that you can just paste into your formula:</description></item><item><title>Miniature guide to building Clang from source</title><link>https://blog.jpeach.org/posts/2016/06/miniature-guide-to-building-clang-from-source/</link><pubDate>Wed, 22 Jun 2016 15:14:30 +1000</pubDate><guid>https://blog.jpeach.org/posts/2016/06/miniature-guide-to-building-clang-from-source/</guid><description>First, check out the sources:
$ cd ~/src
$ git clone http://llvm.org/git/llvm.git
$ cd ~/src/llvm/projects
$ git clone http://llvm.org/git/compiler-rt.git
$ git clone http://llvm.org/git/libcxx.git
$ git clone http://llvm.org/git/libcxxabi.git
$ cd ~/src/llvm/tools
$ git clone http://llvm.org/git/clang.git
$ cd ~/src/llvm/tools/clang/tools
$ git clone http://llvm.org/git/clang-tools-extra.git extra

Next, do the build:
$ mkdir -p ~/src/llvm/build
$ cd ~/src/llvm/build
$ cmake -DCMAKE_INSTALL_PREFIX=/opt/clang -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
$ make -j$(getconf _NPROCESSORS_ONLN)
$ sudo make install

On OS X you should also pass -DDEFAULT_SYSROOT=$(xcrun -show-sdk-path) to the cmake command.</description></item><item><title>Disabling tmp on tempfs for Fedora23</title><link>https://blog.jpeach.org/posts/2016/04/disabling-tmp-on-tempfs-for-fedora23/</link><pubDate>Thu, 28 Apr 2016 15:17:22 +1000</pubDate><guid>https://blog.jpeach.org/posts/2016/04/disabling-tmp-on-tempfs-for-fedora23/</guid><description>Well, Fedora 23 seems to default to placing /tmp in a tiny tmpfs volume, which easily fills, breaking things you need, like dnf.
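You can confirm what is going on with df; if the Filesystem column shows tmpfs, /tmp is living in RAM:

$ df -h /tmp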
Fairly annoying, but the fix from the wiki page is straightforward:
% sudo systemctl mask tmp.mount
% sudo reboot</description></item><item><title>GNU make dependency generation</title><link>https://blog.jpeach.org/posts/2015/11/gnu-make-dependency-generation/</link><pubDate>Mon, 02 Nov 2015 15:18:27 +1000</pubDate><guid>https://blog.jpeach.org/posts/2015/11/gnu-make-dependency-generation/</guid><description>Although I would normally use automake, recently I needed to write a Makefile by hand, so I went down the path of figuring out how to get gcc to generate dependency files as a side-effect of compilation:
# Build rule for compiling C++ with dependency generation as a side-effect. The
# dependencies go into a .deps directory at the same level as the source file.
%.o: %.cpp
	@$(MKDIR) $(*D)/.deps
	$(CXX) $(CXXFLAGS) $(CPPFLAGS) -MP -MF $(*D)/.</description></item><item><title>Maybe libtool is not that bad</title><link>https://blog.jpeach.org/posts/2015/08/maybe-libtool-is-not-that-bad/</link><pubDate>Tue, 25 Aug 2015 15:20:28 +1000</pubDate><guid>https://blog.jpeach.org/posts/2015/08/maybe-libtool-is-not-that-bad/</guid><description>I’m considering taking back all the bad things I have said about libtool. It turns out that by using libltdl it is possible to generate plugins that can be built statically or as shared objects. I know it’s not too bad to implement that in a custom build, but as I understood more about libtool, this turns out to be relatively clean.
This is the best introduction to using libltdl that I have found.</description></item><item><title>Per-module logging with glog</title><link>https://blog.jpeach.org/posts/2015/07/per-module-logging-with-glog/</link><pubDate>Thu, 09 Jul 2015 15:22:15 +1000</pubDate><guid>https://blog.jpeach.org/posts/2015/07/per-module-logging-with-glog/</guid><description>I spent a few hours trying to implement per-module logging in the Mesos logging toggle. The Google logging library supports the --vmodule flag to toggle the logging level on a per-module basis, which looked promising. You can set this at run time using the SetVLOGLevel API.
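For reference, the flag form looks like this (a sketch, assuming a binary that wires glog flags through gflags; the module names are made up):

$ ./my-program --logtostderr --vmodule=registrar=2,http*=1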
Unfortunately, the implementation of the VLOG_IS_ON macro is such that you need to set a per-module log level before the logging call site is hit for the first time, so this is clearly intended only for startup.</description></item><item><title>What I learned about Linux AIO today</title><link>https://blog.jpeach.org/posts/2015/06/what-i-learned-about-linux-aio-today/</link><pubDate>Wed, 10 Jun 2015 15:24:06 +1000</pubDate><guid>https://blog.jpeach.org/posts/2015/06/what-i-learned-about-linux-aio-today/</guid><description> There is no filesystem that implements the AIO fsync operation. When performing AIO reads on vboxfs (the VirtualBox filesystem), it will return -EPROTO. No idea why that happens. Neither vboxfs nor tmpfs support O_DIRECT. AIO seems to work as advertised on XFS, with or without O_DIRECT.</description></item><item><title>Building the Mesos documentation</title><link>https://blog.jpeach.org/posts/2015/04/building-the-mesos-documentation/</link><pubDate>Wed, 22 Apr 2015 15:25:24 +1000</pubDate><guid>https://blog.jpeach.org/posts/2015/04/building-the-mesos-documentation/</guid><description>I made a minor change to the documentation and I wanted to test it, so I had to figure out how to build the Mesos documentation. I’m doing this in a CentOS 7 VM, but I guess something similar would work on different platforms.
First, you need to know that Mesos uses Middleman to build the website from a set of markdown files. Next, you need to know that while the documentation itself is in the main Mesos repository, the Middleman configuration and site tooling is in a separate SVN repository.</description></item><item><title>Building Mesos on OS X</title><link>https://blog.jpeach.org/posts/2015/04/building-mesos-on-os-x/</link><pubDate>Thu, 16 Apr 2015 15:28:08 +1000</pubDate><guid>https://blog.jpeach.org/posts/2015/04/building-mesos-on-os-x/</guid><description>So when you build Mesos on OS X, you have to use Homebrew to install a bunch of dependencies. I was momentarily stumped by the fact that linking the apr package with brew link --force seemed to not make the headers available. Then I realized that you are supposed to use apr-1-config to find the headers location.
Like this:
$ ./configure --prefix=/opt/mesos --with-apr=$(apr-1-config --prefix)</description></item><item><title>Leak detection with tcmalloc</title><link>https://blog.jpeach.org/posts/2015/01/leak-detection-with-tcmalloc/</link><pubDate>Wed, 14 Jan 2015 15:29:43 +1000</pubDate><guid>https://blog.jpeach.org/posts/2015/01/leak-detection-with-tcmalloc/</guid><description>tcmalloc has a built-in leak detection mechanism. It took me a couple of tries to figure out how to make it work, even after reading the documentation. At least on CentOS 7, the trick is to make sure you install the pprof package as well as the gperftools-libs package. You will also need to set the PPROF_PATH environment variable so that the tcmalloc runtime can find pprof. If you don&amp;rsquo;t do this, then the leaks report will not resolve symbols, so the stack traces will not be that useful.
NIST guidelines (thanks Wikipedia)
TLS Recommendations BCP (draft)

Just stashing these links here for next time &amp;hellip;</description></item><item><title>Cannot extend ID. It is not part of highstate</title><link>https://blog.jpeach.org/posts/2013/09/cannot-extend-id.-it-is-not-part-of-highstate/</link><pubDate>Thu, 19 Sep 2013 15:39:12 +1000</pubDate><guid>https://blog.jpeach.org/posts/2013/09/cannot-extend-id.-it-is-not-part-of-highstate/</guid><description>I spent quite a while scratching my head over the following error message from Salt:
Cannot extend ID trafficserver in &amp;quot;base:trafficserver.collector&amp;quot;. It is not part of the high state.

This actually means that you used a requisite clause like watch_in to inject a dependency into a state that Salt cannot resolve. I filed bug 7336.</description></item><item><title>Creating multiple resources with Salt</title><link>https://blog.jpeach.org/posts/2013/08/creating-multiple-resources-with-salt/</link><pubDate>Mon, 05 Aug 2013 15:41:19 +1000</pubDate><guid>https://blog.jpeach.org/posts/2013/08/creating-multiple-resources-with-salt/</guid><description>I wanted to create a Salt Stack state that manages multiple directories. I figured that there was a way to do this, but could not see a good example in the documentation. Fortunately, the very helpful #salt IRC channel pointed me to the answer:
hierarchy:
  file.directory:
    - user: root
    - group: root
    - mode: 755
    - makedirs: True
    - names:
      - /var/lib/hierarchy
      - /var/lib/hierarchy/a
      - /var/lib/hierarchy/b
      - /var/lib/hierarchy/b/c
      - /var/lib/hierarchy/b/c/d</description></item><item><title>Debugging iPXE from the iLO console</title><link>https://blog.jpeach.org/posts/2013/07/debugging-ipxe-from-the-ilo-console/</link><pubDate>Wed, 24 Jul 2013 15:44:00 +1000</pubDate><guid>https://blog.jpeach.org/posts/2013/07/debugging-ipxe-from-the-ilo-console/</guid><description>So I&amp;rsquo;ve been trying to get iPXE chainloading to work and I&amp;rsquo;ve been using the iLO virtual serial console over SSH to verify and debug. iPXE has a DHCP debug build option which you can enable by doing make bin/undionly.kpxe DEBUG=dhcp. However, when you do this, you will find that each line of output on the iLO virtual serial console overwrites the previous line, creating a big illegible mess. Fortunately, you can build iPXE with only serial output support, so that you can actually read the debug messages on the iLO virtual serial console.</description></item><item><title>iLO SSH key management</title><link>https://blog.jpeach.org/posts/2013/07/ilo-ssh-key-management/</link><pubDate>Tue, 02 Jul 2013 15:49:11 +1000</pubDate><guid>https://blog.jpeach.org/posts/2013/07/ilo-ssh-key-management/</guid><description>A few notes and rants about managing SSH keys with HP’s extremely annoying iLO interface.
The iLO ssh console does not support fetching SSH keys over HTTPS. This prevents you from keeping them somewhere useful, like GitHub. When you upload a SSH key and it fails, the iLO web interface will tell you that it needs a PEM-formatted DSA public key. You will find that ssh-keygen has no way to produce this.</description></item><item><title>Argument parsing in Traffic Server plugins</title><link>https://blog.jpeach.org/posts/2012/12/argument-parsing-in-traffic-server-plugins/</link><pubDate>Tue, 18 Dec 2012 15:57:55 +1000</pubDate><guid>https://blog.jpeach.org/posts/2012/12/argument-parsing-in-traffic-server-plugins/</guid><description>When you write a new Traffic Server plugin, you have to choose whether to write a remap plugin, a global plugin or both. There are different plugin entry points for global and remap plugins and you will find yourself having to parse command-line arguments from two different entry points:
tsapi void TSPluginInit(int argc, const char* argv[]);
tsapi TSReturnCode TSRemapNewInstance(int argc, char* argv[], void** ih, char* errbuf, int errbuf_size);

Since we are parsing command-line options, it makes sense to use getopt or getopt_long to do the parsing.</description></item><item><title>TrafficServer GET request walkthrough</title><link>https://blog.jpeach.org/posts/2012/12/trafficserver-get-request-walkthrough/</link><pubDate>Tue, 18 Dec 2012 15:51:33 +1000</pubDate><guid>https://blog.jpeach.org/posts/2012/12/trafficserver-get-request-walkthrough/</guid><description>Traffic Server request processing can be a little complex, with multiple state machines working at the same time and a lot of objects interacting in complex ways, so I thought it would be fun to reverse engineer the code flow from a log trace. I guess that it wasn&amp;rsquo;t as much fun as I had hoped, but it was educational.
This is a GET request with a Range header. The requested document is not currently cached, so Traffic Server just proxies the request.</description></item></channel></rss>