<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Austin Tripp's website</title><link>https://austintripp.ca/</link><description>Austin Tripp's personal website</description><atom:link href="https://austintripp.ca/rss.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2026 &lt;a href="mailto:austin.james.tripp[at]gmail.com"&gt;Austin Tripp&lt;/a&gt; MIT License</copyright><lastBuildDate>Sun, 22 Feb 2026 15:52:52 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Behaviour-based evaluations in Bayesian optimization</title><link>https://austintripp.ca/blog/2026-02-21-behaviour-based-bo-evals/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;I think the Bayesian optimization (BO) research community needs to change its
evaluation practices. In a &lt;a href="https://austintripp.ca/blog/2026-01-25-model-centric-bo/"&gt;previous post&lt;/a&gt;
I explained how I think users should generally bring their own model to BO and
try to calibrate it to their beliefs/expectations. In &lt;em&gt;this&lt;/em&gt; post, I explain
why existing evaluation practices are not well-suited to this stance on BO, and
outline a different approach to BO evaluation based on algorithm behaviour.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-02-21-behaviour-based-bo-evals/"&gt;Read more…&lt;/a&gt; (8 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>bayesian optimization</category><category>machine learning</category><category>opinion</category><guid>https://austintripp.ca/blog/2026-02-21-behaviour-based-bo-evals/</guid><pubDate>Sat, 21 Feb 2026 22:00:00 GMT</pubDate></item><item><title>Rebranding BO away from "black-box" and towards "model-based"</title><link>https://austintripp.ca/blog/2026-02-16-bo-rebrand/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;In a recent blog post (&lt;a href="https://austintripp.ca/blog/2026-01-25-model-centric-bo/"&gt;link&lt;/a&gt;) I described
my "model-centric" view of Bayesian optimization (BO), essentially arguing that
the model is the most important component of BO and BO users (and researchers)
should do more to get it right. Under the assumption that the reader broadly
agrees with the content of that post (or at least thinks this view of BO is one
of several valid views), here I want to argue that the BO community should
&lt;em&gt;rebrand&lt;/em&gt; itself towards model-based optimization and away from black-box
optimization.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-02-16-bo-rebrand/"&gt;Read more…&lt;/a&gt; (6 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>bayesian optimization</category><category>machine learning</category><category>opinion</category><guid>https://austintripp.ca/blog/2026-02-16-bo-rebrand/</guid><pubDate>Mon, 16 Feb 2026 15:00:00 GMT</pubDate></item><item><title>Clarifying noise vs model misspecification in Gaussian Process models (and its importance in BO)</title><link>https://austintripp.ca/blog/2026-02-14-noise-in-gp-models/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;The most common kind of Gaussian process (GP) model is:&lt;/p&gt;
&lt;p&gt;$$f \sim GP\left(\mu(\cdot), k(\cdot, \cdot)\right) $$&lt;/p&gt;
&lt;p&gt;$$ y \sim N(f(x), \sigma^2) $$&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-02-14-noise-in-gp-models/"&gt;Read more…&lt;/a&gt; (4 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>bayesian optimization</category><category>machine learning</category><guid>https://austintripp.ca/blog/2026-02-14-noise-in-gp-models/</guid><pubDate>Sat, 14 Feb 2026 23:00:00 GMT</pubDate></item><item><title>Tyler Cowen's book on talent</title><link>https://austintripp.ca/blog/2026-02-13-talent/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;Cross-posting my summary of the book "Talent: How to Identify Energizers,
Creatives, and Winners Around the World" from my GitHub:
&lt;a href="https://github.com/AustinT/book-summaries/blob/master/non-fiction/2026-02-01-talent.md"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-02-13-talent/"&gt;Read more…&lt;/a&gt; (1 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><guid>https://austintripp.ca/blog/2026-02-13-talent/</guid><pubDate>Fri, 13 Feb 2026 00:00:00 GMT</pubDate></item><item><title>We are underselling the modularity of Bayesian optimization</title><link>https://austintripp.ca/blog/2026-02-09-bo-modularity/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;I think the Bayesian optimization (BO) community is vastly underselling one of
BO's most practically appealing features: &lt;em&gt;modularity&lt;/em&gt;. A "modular" algorithm
is one composed of individual parts which are largely exchangeable. I think BO
is extremely modular, and this is something users want in practice, but BO
papers rarely describe it in this way. This post outlines these points in more
detail and ends with some recommendations.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-02-09-bo-modularity/"&gt;Read more…&lt;/a&gt; (7 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>bayesian optimization</category><category>machine learning</category><category>opinion</category><guid>https://austintripp.ca/blog/2026-02-09-bo-modularity/</guid><pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate></item><item><title>Why I don't care about toy benchmarks in BO</title><link>https://austintripp.ca/blog/2026-02-08-bo-benchmarks/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;A lot of Bayesian optimization (BO) papers include experiments on "toy"
functions. Examples of such toy functions are Ackley, Rosenbrock, Hartmann, and
Branin. I basically ignore these experiments when I read papers and skip to the
next section. When I review papers (and therefore cannot simply skip sections),
I find my eyes glazing over and feel bored: I don't care about these functions,
don't care about the results, and don't really understand why the authors ran
this experiment.&lt;/p&gt;
&lt;p&gt;For a while I assumed every BO researcher feels this way, but a bunch of recent
conversations have made it clear to me that many people do care. So, in this
post I explain why I don't care at all about benchmarks on toy functions (and
why you shouldn't either).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-02-08-bo-benchmarks/"&gt;Read more…&lt;/a&gt; (10 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>bayesian optimization</category><category>machine learning</category><category>opinion</category><guid>https://austintripp.ca/blog/2026-02-08-bo-benchmarks/</guid><pubDate>Sun, 08 Feb 2026 22:00:00 GMT</pubDate></item><item><title>We have forgotten about utility functions in BO (whoops!)</title><link>https://austintripp.ca/blog/2026-01-27-forgotten-utility-bo/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;Bayesian decision theory is one of the best justifications for BO- particularly
for myopic acquisition functions like expected improvement. However, these
acquisition functions are only "optimal" if one's utility function is $u(y) =
y$ (the identity function). Have BO researchers (and BO users) basically
&lt;em&gt;forgotten&lt;/em&gt; to swap this out for "real" utility functions in practice? In this
post I argue that we have largely overlooked this detail (to our own
detriment). It's not &lt;em&gt;too&lt;/em&gt; hard to fix, but unfortunately the $u(y) = y$
assumption is actually quite deeply embedded, and completely removing it will
make things more complicated. Ultimately, despite the difficulty, I think we
should do it anyway.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-01-27-forgotten-utility-bo/"&gt;Read more…&lt;/a&gt; (8 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>bayesian optimization</category><category>machine learning</category><category>opinion</category><guid>https://austintripp.ca/blog/2026-01-27-forgotten-utility-bo/</guid><pubDate>Tue, 27 Jan 2026 00:00:00 GMT</pubDate></item><item><title>My model-centric view of Bayesian optimization</title><link>https://austintripp.ca/blog/2026-01-25-model-centric-bo/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;A lot of conversations at NeurIPS this year made me think that I view the role
of the surrogate model in Bayesian optimization a bit differently than many
other researchers in the field, and this profoundly impacts my view of many
other aspects of BO. Therefore, the purpose of this post is to explain my view
and contrast it with what I believe is the more mainstream view.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-01-25-model-centric-bo/"&gt;Read more…&lt;/a&gt; (18 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>bayesian optimization</category><category>machine learning</category><category>opinion</category><guid>https://austintripp.ca/blog/2026-01-25-model-centric-bo/</guid><pubDate>Sun, 25 Jan 2026 22:00:00 GMT</pubDate></item><item><title>Predictions for ML/AI in 2026 (and 2025 predictions re-visited).</title><link>https://austintripp.ca/blog/2026-01-05-predictions26/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;Last year &lt;a href="https://austintripp.ca/blog/2025-01-01-neurips24-and-trends25/"&gt;I made a bunch of predictions about
ML&lt;/a&gt;, and since 2025 is over it's time
to grade myself and repeat this exercise.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-01-05-predictions26/"&gt;Read more…&lt;/a&gt; (4 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>machine learning</category><category>speculation</category><guid>https://austintripp.ca/blog/2026-01-05-predictions26/</guid><pubDate>Mon, 05 Jan 2026 01:00:00 GMT</pubDate></item><item><title>New Year's Resolutions for 2026 (and scoring 2025 resolutions)</title><link>https://austintripp.ca/blog/2026-01-05-resolutions26/</link><dc:creator>Austin Tripp</dc:creator><description>&lt;div&gt;&lt;p&gt;Happy 2026! Here is my self-assessment on &lt;a href="https://austintripp.ca/blog/2025-01-05-resolutions25/"&gt;last year's
goals&lt;/a&gt; and my goals for this year (2026).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://austintripp.ca/blog/2026-01-05-resolutions26/"&gt;Read more…&lt;/a&gt; (1 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>_recent-highlight</category><category>personal development</category><guid>https://austintripp.ca/blog/2026-01-05-resolutions26/</guid><pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate></item></channel></rss>