<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Transparency.dev Community Blog]]></title><description><![CDATA[Community blog for transparency.dev ecosystems]]></description><link>https://blog.transparency.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1721161749599/f4e7aaf2-3eaa-474e-ae2d-93f0f53345f5.png</url><title>Transparency.dev Community Blog</title><link>https://blog.transparency.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 09:24:39 GMT</lastBuildDate><atom:link href="https://blog.transparency.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Building a Transparent Keyserver]]></title><description><![CDATA[Today, we are going to build a keyserver to look up age public keys. That part is boring. What’s interesting is that we’ll apply the same transparency log technology as the Go Checksum Database to keep the keyserver operator honest and unable to surre...]]></description><link>https://blog.transparency.dev/building-a-transparent-keyserver</link><guid isPermaLink="true">https://blog.transparency.dev/building-a-transparent-keyserver</guid><dc:creator><![CDATA[Filippo Valsorda]]></dc:creator><pubDate>Fri, 19 Dec 2025 14:07:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766152137454/75eb4f40-b57e-480f-b765-9e345076862d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, we are going to build a keyserver to look up <a target="_blank" href="https://age-encryption.org/">age</a> public keys. That part is boring. 
What’s interesting is that we’ll apply the same transparency log technology as the Go Checksum Database to keep the keyserver operator honest and unable to surreptitiously inject malicious keys, while still protecting user privacy and delivering a smooth UX. You can see the final result at <a target="_blank" href="http://keyserver.geomys.org">keyserver.geomys.org</a>. We’ll build it step-by-step, using modern tooling from the tlog ecosystem, integrating transparency in less than 500 lines.</p>
<p>I am extremely excited to write this post: it demonstrates how to use a technology that I strongly believe is key in protecting users and keeping centralized services accountable, and it’s the result of years of effort by me, the <a target="_blank" href="https://transparency.dev/">TrustFabric</a> team at Google, the <a target="_blank" href="https://www.sigsum.org/">Sigsum</a> team at <a target="_blank" href="https://www.glasklarteknik.se/">Glasklar</a>, and many others.</p>
<blockquote>
<p>This article by <a target="_blank" href="https://geomys.org">Geomys</a> is being cross-posted on <a target="_blank" href="https://words.filippo.io/keyserver-tlog">Cryptography Dispatches</a>.</p>
</blockquote>
<p>Let’s start by defining the goal: we want a secure and convenient way to fetch <a target="_blank" href="https://age-encryption.org/">age</a> public keys for other people and services.</p>
<p>The easiest and most usable way to achieve that is to build a centralized keyserver: a web service where you log in with your email address to set your public key, and other people can look up public keys by email address.</p>
<p><img src="https://assets.buttondown.email/images/d4f72ca6-0ff7-4b1b-8890-1b8f0cbfad90.png" alt="the keyserver web UI" /></p>
<p>Trusting the third party that operates the keyserver lets you solve identity, authentication, and spam by just delegating the responsibilities of checking email ownership and implementing rate limiting. The keyserver can send a link to the email address, and whoever receives it is authorized to manage the public key(s) bound to that address.</p>
<p>I had Claude Code <a target="_blank" href="https://github.com/FiloSottile/torchwood/commit/f191bdbadeb9c9c15ba0a9995dffdb20651710da">build the base service</a>, because it’s simple and not the interesting part of what we are doing today. There’s nothing special in the implementation: just a Go server, an SQLite database, a lookup API, a set API protected by a CAPTCHA that sends an email authentication link, and a Go CLI that calls the lookup API.</p>
<h2 id="heading-transparency-logs-and-accountability-for-centralized-services">Transparency logs and accountability for centralized services</h2>
<p>A lot of problems are shaped like this and are much more solvable with a trusted third party: PKIs, package registries, voting systems… Sometimes the trusted third party is encapsulated behind a level of indirection, and we talk about Certificate Authorities, but it’s the same concept.</p>
<p>Centralization is so appealing that even the OpenPGP ecosystem embraced it: after the <a target="_blank" href="https://words.filippo.io/openpgp-is-broken/">SKS pool was killed by spam</a>, a <a target="_blank" href="https://keys.openpgp.org/">new OpenPGP keyserver</a> was built, which is just a centralized, email-authenticated database of public keys. Its <a target="_blank" href="https://keys.openpgp.org/about/faq/#third-party-signatures">FAQ</a> claims they don’t wish to be a CA, but also explains they don’t support the (dubiously effective) Web-of-Trust at all, so effectively they can only act as a trusted third party.</p>
<p>The obvious downside of a trusted third party is, well, trust. You need to trust the operator, but also whoever will control the operator in the future, and also the operator’s security practices. That’s asking a lot, especially these days, and a malicious or compromised keyserver could provide fake public keys to targeted victims with little-to-no chance of detection.</p>
<p><strong>Transparency logs are a technology for applying cryptographic accountability to centralized systems with no UX degradation.</strong></p>
<p>A transparency log or tlog is an append-only, globally consistent list of entries, with efficient cryptographic proofs of inclusion and consistency. The log operator appends entries to the log, which can be tuples like <em>(package, version, hash)</em> or <em>(email, public key)</em>. The clients verify an inclusion proof before accepting an entry, guaranteeing that the log operator will have to stand by that entry in perpetuity and to the whole world, with no way to hide it or disown it. As long as <em>someone</em> who can check the authenticity of the entry will <em>eventually</em> check (or “monitor”) the log, the client can trust that malfeasance will be caught.</p>
<p>Effectively, a tlog lets the log operator stake their reputation to borrow time for collective, potentially manual verification of the log’s entries. This is a middle-ground between impractical local verification mechanisms like the <a target="_blank" href="https://en.wikipedia.org/wiki/Web_of_trust">Web of Trust</a>, and fully trusted mechanisms like centralized X.509 PKIs.</p>
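<p>To make those “efficient cryptographic proofs” concrete, here is a toy sketch of a two-entry log built with the <code>golang.org/x/mod/sumdb/tlog</code> package that the Go Checksum Database tooling (and Torchwood, as we’ll see below) builds on. The entries are made up for illustration: the operator stores the Merkle tree hashes as it appends, and a client checks an inclusion proof against the tree head without fetching the rest of the log.</p>
<pre><code class="lang-go">package main

import (
	"fmt"

	"golang.org/x/mod/sumdb/tlog"
)

// memHashes is a tlog.HashReader backed by an in-memory slice,
// standing in for the log operator's storage.
type memHashes []tlog.Hash

func (m memHashes) ReadHashes(indexes []int64) ([]tlog.Hash, error) {
	hashes := make([]tlog.Hash, len(indexes))
	for i, x := range indexes {
		hashes[i] = m[x]
	}
	return hashes, nil
}

func main() {
	// The operator appends entries, storing the new Merkle tree hashes.
	var storage memHashes
	entries := []string{
		"alice@example.com\nage1alice...\n",
		"bob@example.com\nage1bob...\n",
	}
	for i, e := range entries {
		newHashes, err := tlog.StoredHashes(int64(i), []byte(e), storage)
		if err != nil {
			panic(err)
		}
		storage = append(storage, newHashes...)
	}

	// The operator publishes the tree head and a proof that entry 0
	// is included in a tree of this size.
	n := int64(len(entries))
	root, err := tlog.TreeHash(n, storage)
	if err != nil {
		panic(err)
	}
	proof, err := tlog.ProveRecord(n, 0, storage)
	if err != nil {
		panic(err)
	}

	// A client re-hashes the entry it was served and checks the proof
	// against the tree head, without seeing the rest of the log.
	leaf := tlog.RecordHash([]byte(entries[0]))
	if err := tlog.CheckRecord(proof, n, root, 0, leaf); err != nil {
		panic(err)
	}
	fmt.Println("inclusion proof verified")
}
</code></pre>
<p>Note that the proof for an entry in a two-entry tree is a single hash, and proof size grows only logarithmically with the size of the log.</p>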
<blockquote>
<p>If you’d like a longer introduction, <a target="_blank" href="https://youtu.be/SOfOe_z37jQ">my Real World Crypto 2024 talk</a> presents both the technical functioning and abstraction of modern transparency logs.</p>
</blockquote>
<p>There is a whole ecosystem of interoperable tlog tools and publicly available infrastructure built around <a target="_blank" href="https://c2sp.org/">C2SP</a> specifications. That’s what we are going to use today to add a tlog to our keyserver.</p>
<blockquote>
<p>If you want to catch up with the tlog ecosystem, <a target="_blank" href="https://www.youtube.com/watch?v=B-0ZW1ovPmk">my 2025 Transparency.dev Summit Keynote</a> maps out the tools, applications, and specifications.</p>
</blockquote>
<h3 id="heading-tlogs-vs-certificate-transparency-vs-key-transparency">tlogs vs Certificate Transparency vs Key Transparency</h3>
<p>If you are familiar with Certificate Transparency, tlogs are derived from CT, but with a few major differences. Most importantly, there is no separation between the entry producer (in CT, the CAs) and the log operator; moreover, clients check actual inclusion proofs instead of SCTs; finally, there are stronger split-view protections, as we will see below. The <a target="_blank" href="https://c2sp.org/static-ct-api">Static CT API</a> and <a target="_blank" href="https://sunlight.dev">Sunlight</a> CT log implementation were a first <a target="_blank" href="https://letsencrypt.org/2025/06/11/reflections-on-a-year-of-sunlight">successful</a> step in moving CT towards the tlog ecosystem, and a proposed design called <a target="_blank" href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/">Merkle Tree Certificates</a> redesigns the WebPKI to have tlog-like and tlog-interoperable transparency.</p>
<p>In my experience, it’s best <em>not</em> to think about CT when learning about tlogs. A better production example of a tlog is the <a target="_blank" href="https://golang.org/design/25530-sumdb">Go Checksum Database</a>, where Google logs the module name, version, and hash for every module version observed by the Go Modules Proxy. The module fetches happen over regular HTTPS, so there is no publicly-verifiable proof of their authenticity. Instead, the central party appends every observation to the tlog, so that any misbehavior can be caught. The <code>go get</code> command verifies inclusion proofs for every module it downloads, protecting 100% of the ecosystem, without requiring module authors to manage keys.</p>
<blockquote>
<p>Katie Hockman gave <a target="_blank" href="https://www.youtube.com/watch?v=KqTySYYhPUE">a great talk on the Go Checksum Database</a> at GopherCon 2019.</p>
</blockquote>
<p>You might also have heard of <a target="_blank" href="https://engineering.fb.com/2023/04/13/security/whatsapp-key-transparency/">Key Transparency</a>. KT is an overlapping technology that was deployed by Apple, WhatsApp, and Signal amongst others. It has similar goals, but picks different tradeoffs that involve significantly more complexity, in exchange for better privacy and scalability in some settings.</p>
<h2 id="heading-a-tlog-for-our-keyserver">A tlog for our keyserver</h2>
<p>Ok, so how do we apply a tlog to our email-based keyserver?</p>
<p>It’s pretty simple, and we can do it with <a target="_blank" href="https://github.com/FiloSottile/torchwood/commit/d031d34ec656156d51df074ae8823edd60329d92">a 250-line diff</a> using <a target="_blank" href="https://github.com/transparency-dev/tessera">Tessera</a> and <a target="_blank" href="https://filippo.io/torchwood">Torchwood</a>. Tessera is a general-purpose tlog implementation library, which can be backed by object storage or a POSIX filesystem. For our keyserver, we’ll use the latter backend, which stores the whole tlog in a directory according to the <a target="_blank" href="http://c2sp.org/tlog-tiles">c2sp.org/tlog-tiles</a> specification.</p>
<pre><code class="lang-go">s, err := note.NewSigner(os.Getenv(<span class="hljs-string">"LOG_KEY"</span>))
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    log.Fatalln(<span class="hljs-string">"failed to create checkpoint signer:"</span>, err)
}
v, err := torchwood.NewVerifierFromSigner(os.Getenv(<span class="hljs-string">"LOG_KEY"</span>))
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    log.Fatalln(<span class="hljs-string">"failed to create checkpoint verifier:"</span>, err)
}
policy := torchwood.ThresholdPolicy(<span class="hljs-number">2</span>, torchwood.OriginPolicy(v.Name()),
    torchwood.SingleVerifierPolicy(v))

driver, err := posix.New(ctx, posix.Config{
    Path: *logPath,
})
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    log.Fatalln(<span class="hljs-string">"failed to create log storage driver:"</span>, err)
}

<span class="hljs-comment">// Since this is a low-traffic but interactive server, disable batching to</span>
<span class="hljs-comment">// remove integration latency for the first request. Keep a 1s checkpoint</span>
<span class="hljs-comment">// interval so as not to hit the witnesses too often; this will be observed only</span>
<span class="hljs-comment">// if two requests come in quick succession. Finally, only publish a</span>
<span class="hljs-comment">// checkpoint once a day if there are no new entries, making the average qps</span>
<span class="hljs-comment">// on witnesses low. Poll for new checkpoints quickly since it should be</span>
<span class="hljs-comment">// just a read from a hot filesystem cache.</span>
checkpointInterval := <span class="hljs-number">1</span> * time.Second
<span class="hljs-keyword">if</span> testing.Testing() {
    checkpointInterval = <span class="hljs-number">100</span> * time.Millisecond
}
appender, shutdown, logReader, err := tessera.NewAppender(ctx, driver, tessera.NewAppendOptions().
    WithCheckpointSigner(s).
    WithBatching(<span class="hljs-number">1</span>, tessera.DefaultBatchMaxAge).
    WithCheckpointInterval(checkpointInterval).
    WithCheckpointRepublishInterval(<span class="hljs-number">24</span>*time.Hour))
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    log.Fatalln(<span class="hljs-string">"failed to create log appender:"</span>, err)
}
<span class="hljs-keyword">defer</span> shutdown(context.Background())
awaiter := tessera.NewPublicationAwaiter(ctx, logReader.ReadCheckpoint, <span class="hljs-number">25</span>*time.Millisecond)
</code></pre>
<p>Every time a user sets their key, we append an encoded <em>(email, public key)</em> entry to the tlog, and we store the tlog entry index in the database.</p>
<pre><code class="lang-diff"><span class="hljs-addition">+    // Add to transparency log</span>
<span class="hljs-addition">+    if strings.ContainsAny(email, "\n") {</span>
<span class="hljs-addition">+        http.Error(w, "Invalid email format", http.StatusBadRequest)</span>
<span class="hljs-addition">+        return</span>
<span class="hljs-addition">+    }</span>
<span class="hljs-addition">+    entry := tessera.NewEntry(fmt.Appendf(nil, "%s\n%s\n", email, pubkey))</span>
<span class="hljs-addition">+    index, _, err := s.awaiter.Await(r.Context(), s.appender.Add(r.Context(), entry))</span>
<span class="hljs-addition">+    if err != nil {</span>
<span class="hljs-addition">+        http.Error(w, "Failed to add to transparency log", http.StatusInternalServerError)</span>
<span class="hljs-addition">+        log.Printf("transparency log error: %v", err)</span>
<span class="hljs-addition">+        return</span>
<span class="hljs-addition">+    }</span>
<span class="hljs-addition">+</span>
     // Store in database
<span class="hljs-deletion">-    if err := s.storeKey(email, pubkey); err != nil {</span>
<span class="hljs-addition">+    if err := s.storeKey(email, pubkey, int64(index.Index)); err != nil {</span>
         http.Error(w, "Failed to store key", http.StatusInternalServerError)
         log.Printf("database error: %v", err)
         return
     }
</code></pre>
<p>The lookup API produces a proof from the index and provides it to the client.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *Server)</span> <span class="hljs-title">makeSpicySignature</span><span class="hljs-params">(ctx context.Context, index <span class="hljs-keyword">int64</span>)</span> <span class="hljs-params">([]<span class="hljs-keyword">byte</span>, error)</span></span> {
    checkpoint, err := s.reader.ReadCheckpoint(ctx)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"failed to read checkpoint: %v"</span>, err)
    }
    c, _, err := torchwood.VerifyCheckpoint(checkpoint, s.policy)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"failed to parse checkpoint: %v"</span>, err)
    }
    p, err := tlog.ProveRecord(c.N, index, torchwood.TileHashReaderWithContext(
        ctx, c.Tree, tesserax.NewTileReader(s.reader)))
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"failed to create proof: %v"</span>, err)
    }
    <span class="hljs-keyword">return</span> torchwood.FormatProof(index, p, checkpoint), <span class="hljs-literal">nil</span>
}
</code></pre>
<p>The proof follows the <a target="_blank" href="http://c2sp.org/tlog-proof">c2sp.org/tlog-proof</a> specification. It looks like this</p>
<pre><code class="lang-plaintext">c2sp.org/tlog-proof@v1
index 1
CJdjppwZSa2A60oEpcdj/OFjVQyrkP3fu/Ot2r6smg0=

keyserver.geomys.org
2
HtFreYGe2VBtaf3Vf0AG0DAwEZ+H92HQqrx4dkrzk0U=

— keyserver.geomys.org FrMVCWmHnYfHReztLams2F3HUY6UMub3c5xu7+e8R8SAk9cxPKAB1fsQ6gFM16xwkvZ8p5aWaBf8km+M20eHErSfGwI=
</code></pre>
<p>and it combines a <a target="_blank" href="https://c2sp.org/tlog-checkpoint">checkpoint</a> (a signed snapshot of the log at a certain size), the index of the entry in the log, and a proof of inclusion of the entry in the checkpoint.</p>
<p>The client CLI receives the proof from the lookup API, checks the signature on the checkpoint from the built-in log public key, hashes the expected entry, and checks the inclusion proof for that hash and checkpoint. It can do all this without interacting further with the log.</p>
<pre><code class="lang-go">vkey := os.Getenv(<span class="hljs-string">"AGE_KEYSERVER_PUBKEY"</span>)
<span class="hljs-keyword">if</span> vkey == <span class="hljs-string">""</span> {
    vkey = defaultKeyserverPubkey
}
v, err := note.NewVerifier(vkey)
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    fmt.Fprintf(os.Stderr, <span class="hljs-string">"Error: invalid keyserver public key: %v\n"</span>, err)
    os.Exit(<span class="hljs-number">1</span>)
}
policy := torchwood.ThresholdPolicy(<span class="hljs-number">2</span>, torchwood.OriginPolicy(v.Name()),
    torchwood.SingleVerifierPolicy(v))
</code></pre>
<pre><code class="lang-go"><span class="hljs-comment">// Verify spicy signature</span>
entry := fmt.Appendf(<span class="hljs-literal">nil</span>, <span class="hljs-string">"%s\n%s\n"</span>, result.Email, result.Pubkey)
<span class="hljs-keyword">if</span> err := torchwood.VerifyProof(policy, tlog.RecordHash(entry), []<span class="hljs-keyword">byte</span>(result.Proof)); err != <span class="hljs-literal">nil</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>, fmt.Errorf(<span class="hljs-string">"failed to verify key proof: %w"</span>, err)
}
</code></pre>
<p>If you squint, you can see that the proof is really a “fat signature” for the entry, which you verify with the log’s public key, just like you’d verify an Ed25519 or RSA signature for a message. I like to call them <em>spicy signatures</em> to stress how <strong>tlogs can be deployed anywhere you can deploy regular digital signatures</strong>.</p>
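<p>At the bottom of that fat signature is an ordinary signed note. As a minimal sketch (with a made-up origin name and a freshly generated key, not the real keyserver key), here is the sign/verify round-trip with <code>golang.org/x/mod/sumdb/note</code>, the package behind the <code>note.NewSigner</code> and <code>note.NewVerifier</code> calls above.</p>
<pre><code class="lang-go">package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/mod/sumdb/note"
)

func main() {
	// Generate a log key pair. In production, the signing key stays on
	// the server and the verifier key is compiled into the client.
	skey, vkey, err := note.GenerateKey(rand.Reader, "keyserver.example.org")
	if err != nil {
		panic(err)
	}

	// The log signs a checkpoint: origin line, tree size, root hash.
	s, err := note.NewSigner(skey)
	if err != nil {
		panic(err)
	}
	msg := new(note.Note)
	msg.Text = "keyserver.example.org\n2\nHtFreYGe2VBtaf3Vf0AG0DAwEZ+H92HQqrx4dkrzk0U=\n"
	signed, err := note.Sign(msg, s)
	if err != nil {
		panic(err)
	}

	// The client opens the note with its built-in public key; Open fails
	// if no known key produced a valid signature.
	v, err := note.NewVerifier(vkey)
	if err != nil {
		panic(err)
	}
	if _, err := note.Open(signed, note.VerifierList(v)); err != nil {
		panic(err)
	}
	fmt.Println("checkpoint signature verified by", v.Name())
}
</code></pre>
<p>The policy layer shown earlier sits on top of this primitive, additionally checking the checkpoint’s origin line and requiring a threshold of verifiers.</p>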
<h3 id="heading-monitoring">Monitoring</h3>
<p>What’s the point of all this though? The point is that anyone can look through the log to make sure the keyserver is not serving unauthorized keys for their email address! Indeed, just like <a target="_blank" href="https://alexgaynor.net/2024/sep/09/signatures-are-like-backups/">backups are useless without restores and signatures are useless without verification</a>, <a target="_blank" href="https://www.youtube.com/watch?v=uZXESulUuKA">tlogs are useless without monitoring</a>. That means we need to build tooling to monitor the log.</p>
<p>On the server side, it takes two lines of code to expose the Tessera POSIX log directory.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Serve tlog-tiles log</span>
fs := http.StripPrefix(<span class="hljs-string">"/tlog/"</span>, http.FileServer(http.Dir(*logPath)))
mux.Handle(<span class="hljs-string">"GET /tlog/"</span>, fs)
</code></pre>
<p>On the client side, we add an <code>-all</code> flag to the CLI that reads all matching entries in the log.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">monitorLog</span><span class="hljs-params">(serverURL <span class="hljs-keyword">string</span>, policy torchwood.Policy, email <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">([]<span class="hljs-keyword">string</span>, error)</span></span> {
    f, err := torchwood.NewTileFetcher(serverURL+<span class="hljs-string">"/tlog"</span>, torchwood.WithUserAgent(<span class="hljs-string">"age-keylookup/1.0"</span>))
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"failed to create tile fetcher: %w"</span>, err)
    }
    c, err := torchwood.NewClient(f)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"failed to create torchwood client: %w"</span>, err)
    }

    <span class="hljs-comment">// Fetch and verify checkpoint</span>
    signedCheckpoint, err := f.ReadEndpoint(context.Background(), <span class="hljs-string">"checkpoint"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"failed to read checkpoint: %w"</span>, err)
    }
    checkpoint, _, err := torchwood.VerifyCheckpoint(signedCheckpoint, policy)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"failed to parse checkpoint: %w"</span>, err)
    }

    <span class="hljs-comment">// Fetch all entries up to the checkpoint size</span>
    <span class="hljs-keyword">var</span> pubkeys []<span class="hljs-keyword">string</span>
    <span class="hljs-keyword">for</span> i, entry := <span class="hljs-keyword">range</span> c.AllEntries(context.Background(), checkpoint.Tree, <span class="hljs-number">0</span>) {
        e, rest, ok := strings.Cut(<span class="hljs-keyword">string</span>(entry), <span class="hljs-string">"\n"</span>)
        <span class="hljs-keyword">if</span> !ok {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"malformed log entry %d: %q"</span>, i, <span class="hljs-keyword">string</span>(entry))
        }
        k, rest, ok := strings.Cut(rest, <span class="hljs-string">"\n"</span>)
        <span class="hljs-keyword">if</span> !ok || rest != <span class="hljs-string">""</span> {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"malformed log entry %d: %q"</span>, i, <span class="hljs-keyword">string</span>(entry))
        }
        <span class="hljs-keyword">if</span> e == email {
            pubkeys = <span class="hljs-built_in">append</span>(pubkeys, k)
        }
    }
    <span class="hljs-keyword">if</span> c.Err() != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"error fetching log entries: %w"</span>, c.Err())
    }

    <span class="hljs-keyword">return</span> pubkeys, <span class="hljs-literal">nil</span>
}
</code></pre>
<p>To enable effective monitoring, we also normalize email addresses by trimming spaces and lowercasing them, since users are unlikely to monitor all the variations. We do it before sending the login link, so normalization can't lead to impersonation.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Normalize email</span>
email = strings.TrimSpace(strings.ToLower(email))
</code></pre>
<p>A complete monitoring story would involve third-party services that monitor the log for you and email you if new keys are added, like <a target="_blank" href="https://www.gopherwatch.org/">gopherwatch</a> and <a target="_blank" href="https://sourcespotter.com/sumdb/">Source Spotter</a> do for the Go Checksum Database, but the <code>-all</code> flag is a start.</p>
<p>The <a target="_blank" href="https://github.com/FiloSottile/torchwood/commit/d031d34ec656156d51df074ae8823edd60329d92">full change</a> involves <em>5 files changed, 251 insertions(+), 6 deletions(-)</em>, plus tests, and includes a new keygen helper binary, the required database schema and help text and API changes, and web UI changes to show the proof.</p>
<h2 id="heading-privacy-with-vrfs">Privacy with VRFs</h2>
<p>We created a problem by implementing this tlog, though: now all the email addresses of our users are public! While this is ok for module names in the Go Checksum Database, allowing email address enumeration in our keyserver is a non-starter for privacy and spam reasons.</p>
<p>We could hash the email addresses, but that would still allow offline brute-force attacks. The right tool for the job is a Verifiable Random Function. You can think of a VRF as a hash with a private and public key: only you can produce a hash value, using the private key, but anyone can check that it’s the correct (and unique) hash value, using the public key.</p>
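<p>To see why plain hashing falls short, here’s a quick sketch of the offline attack (with made-up addresses): anyone with a copy of the log can test candidate emails against the published hashes at full speed, with no server in the loop to rate-limit them.</p>
<pre><code class="lang-go">package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// A log entry that stores sha256(email) instead of the email itself.
	published := sha256.Sum256([]byte("alice@example.com"))

	// An attacker with a dictionary of candidate addresses can test them
	// against the published hash offline, with nothing to slow them down.
	candidates := []string{"bob@example.com", "alice@example.com", "carol@example.com"}
	for _, c := range candidates {
		if sha256.Sum256([]byte(c)) == published {
			fmt.Println("recovered:", c)
		}
	}
}
</code></pre>
<p>A VRF hash, by contrast, can only be computed with the keyserver’s private key, so this attack degrades into queries against the online, rate-limited API.</p>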
<p>Overall, implementing VRFs <a target="_blank" href="https://github.com/FiloSottile/torchwood/commit/52f07f1bcc7bb402b5f84a77a57dad8d9ce668e6">takes less than 130 lines</a> using the <a target="_blank" href="http://c2sp.org/vrf-r255">c2sp.org/vrf-r255</a> instantiation based on <a target="_blank" href="https://ristretto.group/">ristretto255</a>, implemented by <a target="_blank" href="http://filippo.io/mostly-harmless/vrf-r255">filippo.io/mostly-harmless/vrf-r255</a> (pending a more permanent location). Instead of the email address, we include the VRF hash in the log entry, and we save the VRF proof in the database.</p>
<pre><code class="lang-diff"><span class="hljs-addition">+        // Compute VRF hash and proof</span>
<span class="hljs-addition">+        vrfProof := s.vrf.Prove([]byte(email))</span>
<span class="hljs-addition">+        vrfHash := base64.StdEncoding.EncodeToString(vrfProof.Hash())</span>
<span class="hljs-addition">+</span>
         // Add to transparency log
<span class="hljs-deletion">-        entry := tessera.NewEntry(fmt.Appendf(nil, "%s\n%s\n", email, pubkey))</span>
<span class="hljs-addition">+        entry := tessera.NewEntry(fmt.Appendf(nil, "%s\n%s\n", vrfHash, pubkey))</span>
         index, _, err := s.awaiter.Await(r.Context(), s.appender.Add(r.Context(), entry))
         if err != nil {
             http.Error(w, "Failed to add to transparency log", http.StatusInternalServerError)
             log.Printf("transparency log error: %v", err)
             return
         }

         // [...]

         // Store in database
<span class="hljs-deletion">-        if err := s.storeKey(email, pubkey, int64(index.Index)); err != nil {</span>
<span class="hljs-addition">+        if err := s.storeKey(email, pubkey, int64(index.Index), vrfProof.Bytes()); err != nil {</span>
             http.Error(w, "Failed to store key", http.StatusInternalServerError)
             log.Printf("database error: %v", err)
             return
         }
</code></pre>
<p>The tlog proof format has space for application-specific opaque extra data, so we can store the VRF proof there, to keep the tlog proof self-contained.</p>
<pre><code class="lang-diff"><span class="hljs-deletion">-    return torchwood.FormatProof(index, p, checkpoint), nil</span>
<span class="hljs-addition">+    return torchwood.FormatProofWithExtraData(index, vrfProof, p, checkpoint), nil</span>
</code></pre>
<p>In the client CLI, we extract the VRF hash from the tlog proof’s extra data and verify it’s the correct hash for the email address.</p>
<pre><code class="lang-diff"><span class="hljs-addition">+    // Compute and verify VRF hash</span>
<span class="hljs-addition">+    vrfProofBytes, err := torchwood.ProofExtraData([]byte(result.Proof))</span>
<span class="hljs-addition">+    if err != nil {</span>
<span class="hljs-addition">+        return "", fmt.Errorf("failed to extract VRF proof: %w", err)</span>
<span class="hljs-addition">+    }</span>
<span class="hljs-addition">+    vrfProof, err := vrf.NewProof(vrfProofBytes)</span>
<span class="hljs-addition">+    if err != nil {</span>
<span class="hljs-addition">+        return "", fmt.Errorf("failed to parse VRF proof: %w", err)</span>
<span class="hljs-addition">+    }</span>
<span class="hljs-addition">+    vrfHash, err := vrfKey.Verify(vrfProof, []byte(email))</span>
<span class="hljs-addition">+    if err != nil {</span>
<span class="hljs-addition">+        return "", fmt.Errorf("failed to verify VRF proof: %w", err)</span>
<span class="hljs-addition">+    }</span>
<span class="hljs-addition">+</span>
     // Verify spicy signature
<span class="hljs-deletion">-    entry := fmt.Appendf(nil, "%s\n%s\n", result.Email, result.Pubkey)</span>
<span class="hljs-addition">+    vrfHashB64 := base64.StdEncoding.EncodeToString(vrfHash)</span>
<span class="hljs-addition">+    entry := fmt.Appendf(nil, "%s\n%s\n", vrfHashB64, result.Pubkey)</span>
     if err := torchwood.VerifyProof(policy, tlog.RecordHash(entry), []byte(result.Proof)); err != nil {
         return "", fmt.Errorf("failed to verify key proof: %w", err)
     }
</code></pre>
<p>How do we do monitoring now, though? We need to add a new API that provides the VRF hash (and proof) for an email address.</p>
<pre><code class="lang-diff">     mux.HandleFunc("GET /manage", srv.handleManage)
     mux.HandleFunc("POST /setkey", srv.handleSetKey)
     mux.HandleFunc("GET /api/lookup", srv.handleLookup)
<span class="hljs-addition">+    mux.HandleFunc("GET /api/monitor", srv.handleMonitor)</span>
     mux.HandleFunc("POST /api/verify-token", srv.handleVerifyToken)
</code></pre>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *Server)</span> <span class="hljs-title">handleMonitor</span><span class="hljs-params">(w http.ResponseWriter, r *http.Request)</span></span> {
    email := r.URL.Query().Get(<span class="hljs-string">"email"</span>)
    <span class="hljs-keyword">if</span> email == <span class="hljs-string">""</span> {
        http.Error(w, <span class="hljs-string">"Email parameter required"</span>, http.StatusBadRequest)
        <span class="hljs-keyword">return</span>
    }

    <span class="hljs-comment">// Return as JSON</span>
    w.Header().Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)
    json.NewEncoder(w).Encode(<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]any{
        <span class="hljs-string">"email"</span>:     email,
        <span class="hljs-string">"vrf_proof"</span>: s.vrf.Prove([]<span class="hljs-keyword">byte</span>(email)).Bytes(),
    })
}
</code></pre>
<p>On the client side, we use that API to obtain the VRF proof, we verify it, and we look for the VRF hash in the log instead of looking for the email address.</p>
<p>Attackers can still enumerate email addresses by hitting the public lookup or monitor API, but they’ve always been able to do that: serving such a public API is the point of the keyserver! With VRFs, we restored the original status quo: enumeration requires brute-forcing the online, rate-limited API, instead of having a full list of email addresses in the tlog (or hashes that can be brute-forced offline).</p>
<p>VRFs have a further benefit: if a user requests to be deleted from the service, we can’t remove their entries from the tlog, but we <em>can</em> stop serving the VRF for their email address from the lookup and monitor APIs. This makes it impossible to obtain the key history for that user, or even to check if they ever used the keyserver, but doesn’t impact monitoring for other users.</p>
<p>The <a target="_blank" href="https://github.com/FiloSottile/torchwood/commit/52f07f1bcc7bb402b5f84a77a57dad8d9ce668e6">full change</a> adding VRFs involves <em>3 files changed, 125 insertions(+), 13 deletions(-)</em>, plus tests.</p>
<h2 id="heading-anti-poisoning">Anti-poisoning</h2>
<p>We have one last marginal risk to mitigate: since we can’t ever remove entries from the tlog, what if someone inserts some unsavory message in the log by smuggling it in as a public key, like <code>age1llllllllllllllrustevangellsmstrlkef0rcellllllllllllq574n08</code>?</p>
<p>Protecting against this risk is called <em>anti-poisoning</em>. The risk to our log is relatively small: public keys have to be Bech32-encoded and short, so an attacker can’t usefully embed images or malware. Still, it’s easy enough to neutralize: instead of the public keys, we put their hashes in the tlog entry, keeping the original public keys in a new table in the database, and serving them as part of the monitor API.</p>
<pre><code class="lang-diff">         // Compute VRF hash and proof
         vrfProof := s.vrf.Prove([]byte(email))
<span class="hljs-deletion">-        vrfHash := base64.StdEncoding.EncodeToString(vrfProof.Hash())</span>
<span class="hljs-addition">+</span>
<span class="hljs-addition">+        // Keep track of the unhashed key</span>
<span class="hljs-addition">+        if err := s.storeHistory(email, pubkey); err != nil {</span>
<span class="hljs-addition">+            http.Error(w, "Failed to store key history", http.StatusInternalServerError)</span>
<span class="hljs-addition">+            log.Printf("database error: %v", err)</span>
<span class="hljs-addition">+            return</span>
<span class="hljs-addition">+        }</span>

         // Add to transparency log
<span class="hljs-deletion">-        entry := tessera.NewEntry(fmt.Appendf(nil, "%s\n%s\n", vrfHash, pubkey))</span>
<span class="hljs-addition">+        h := sha256.New()</span>
<span class="hljs-addition">+        h.Write([]byte(pubkey))</span>
<span class="hljs-addition">+        entry := tessera.NewEntry(h.Sum(vrfProof.Hash())) // vrf-r255(email) || SHA-256(pubkey)</span>
         index, _, err := s.awaiter.Await(r.Context(), s.appender.Add(r.Context(), entry))
</code></pre>
<p>It’s <em>very important</em> that we persist the original key in the database before adding the entry to the tlog. Losing the original key would be indistinguishable from refusing to provide a malicious key to monitors.</p>
<p>On the client side, to do a lookup we just hash the public key when verifying the inclusion proof. To monitor in <code>-all</code> mode, we match the hashes against the list of original public keys provided by the server through the monitor API.</p>
<pre><code class="lang-diff">     var result struct {
         Email    string   `json:"email"`
         VRFProof []byte   `json:"vrf_proof"`
<span class="hljs-addition">+        History  []string `json:"history"`</span>
     }
</code></pre>
<pre><code class="lang-diff"><span class="hljs-addition">+    // Prepare map of hashes of historical keys</span>
<span class="hljs-addition">+    historyHashes := make(map[[32]byte]string)</span>
<span class="hljs-addition">+    for _, pk := range result.History {</span>
<span class="hljs-addition">+        h := sha256.Sum256([]byte(pk))</span>
<span class="hljs-addition">+        historyHashes[h] = pk</span>
<span class="hljs-addition">+    }</span>
</code></pre>
<pre><code class="lang-diff">     // Fetch all entries up to the checkpoint size
     var pubkeys []string
     for i, entry := range c.AllEntries(context.Background(), checkpoint.Tree, 0) {
<span class="hljs-deletion">-        e, rest, ok := strings.Cut(string(entry), "\n")</span>
<span class="hljs-deletion">-        if !ok {</span>
<span class="hljs-deletion">-            return nil, fmt.Errorf("malformed log entry %d: %q", i, string(entry))</span>
<span class="hljs-deletion">-        }</span>
<span class="hljs-deletion">-        k, rest, ok := strings.Cut(rest, "\n")</span>
<span class="hljs-deletion">-        if !ok || rest != "" {</span>
<span class="hljs-deletion">-            return nil, fmt.Errorf("malformed log entry %d: %q", i, string(entry))</span>
<span class="hljs-deletion">-        }</span>
<span class="hljs-deletion">-        if e == base64.StdEncoding.EncodeToString(vrfHash) {</span>
<span class="hljs-deletion">-            pubkeys = append(pubkeys, k)</span>
<span class="hljs-deletion">-        }</span>
<span class="hljs-addition">+        if len(entry) != 64+32 {</span>
<span class="hljs-addition">+            return nil, fmt.Errorf("invalid entry size at index %d", i)</span>
<span class="hljs-addition">+        }</span>
<span class="hljs-addition">+        if !bytes.Equal(entry[:64], vrfHash) {</span>
<span class="hljs-addition">+            continue</span>
<span class="hljs-addition">+        }</span>
<span class="hljs-addition">+        pk, ok := historyHashes[([32]byte)(entry[64:])]</span>
<span class="hljs-addition">+        if !ok {</span>
<span class="hljs-addition">+            return nil, fmt.Errorf("found unknown public key hash in log at index %d", i)</span>
<span class="hljs-addition">+        }</span>
<span class="hljs-addition">+        pubkeys = append(pubkeys, pk)</span>
     }
</code></pre>
<p>Our final log entry format is <code>vrf-r255(email) || SHA-256(pubkey)</code>. Designing the tlog entry is the most important part of deploying a tlog: it needs to include enough information to let monitors isolate all the entries relevant to them, but not enough information to pose privacy or poisoning threats.</p>
<p>The <a target="_blank" href="https://github.com/FiloSottile/torchwood/commit/194a603666e1e278785878a727c88f062725eabc">full change</a> providing anti-poisoning involves <em>2 files changed, 93 insertions(+), 19 deletions(-)</em>, plus tests.</p>
<h2 id="heading-non-equivocation-and-the-witness-network">Non-equivocation and the Witness Network</h2>
<p>We’re almost done! There’s still one thing to fix, and it <em>used</em> to be the hardest part.</p>
<p>To get the delayed, collective verification we need, all clients and monitors must see <em>consistent</em> views of the same log, where the log maintains its append-only property. This is called non-equivocation, or split-view protection. In other words, how do we stop the log operator from showing an inclusion proof for log A to a client, and then a different log B to the monitors?</p>
<p>Just like logging without a monitoring story is like signing without verification, logging without a non-equivocation story is just a complicated signature algorithm with no strong transparency properties.</p>
<p>This is the hard part because in the general case <a target="_blank" href="https://www.youtube.com/watch?v=_UHied0ZcIs&amp;list=PLwiyx1dc3P2J2jur9fm5U2ss9o2kHzNLB&amp;index=6&amp;pp=gAQBiAQBsAgC"><em>you can’t do it alone</em></a>. Instead, the tlog ecosystem has the concept of <a target="_blank" href="https://c2sp.org/tlog-witness">witness cosigners</a>: third-party operated services which cosign a checkpoint to attest that it is consistent with all the other checkpoints the witness observed for that log. Clients check these witness <a target="_blank" href="https://c2sp.org/tlog-cosignature">cosignatures</a> to get assurance that—unless a quorum of witnesses is colluding with the log—they are not being presented a split-view of the log.</p>
<p>These witnesses are <em>extremely</em> efficient to operate: the log provides the O(log N) consistency proof when requesting a cosignature, and the witness only needs to store the O(1) latest checkpoint it observed. All the potentially intensive verification is deferred and delegated to monitors, which can be sure to have the same view as all clients thanks to the witness cosignatures.</p>
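<p>To make the storage claim concrete, here is a minimal sketch of a witness’s core loop in plain Go. The consistency check is abstracted into a stub function; a real witness verifies the log-supplied Merkle consistency proof there, so this is an illustration of the state machine, not the actual tlog-witness protocol:</p>

```go
package main

import "fmt"

// checkpoint is the witness's entire state for a log: O(1), just the
// latest observed tree head.
type checkpoint struct {
	Size int64
	Root [32]byte
}

type witness struct {
	latest checkpoint
	// consistent reports whether next extends prev append-only; a real
	// witness verifies the log-supplied O(log N) consistency proof here.
	consistent func(prev, next checkpoint, proof [][32]byte) bool
}

// observe accepts (and would cosign) next only if it is a consistent
// extension of the previously stored checkpoint.
func (w *witness) observe(next checkpoint, proof [][32]byte) error {
	if w.latest.Size > next.Size {
		return fmt.Errorf("tree shrank: %d to %d", w.latest.Size, next.Size)
	}
	if !w.consistent(w.latest, next, proof) {
		return fmt.Errorf("inconsistent with previous checkpoint")
	}
	w.latest = next // only the newest checkpoint is kept
	return nil
}

func main() {
	w := witness{consistent: func(prev, next checkpoint, proof [][32]byte) bool {
		return true // stubbed out for this sketch
	}}
	fmt.Println(w.observe(checkpoint{Size: 10}, nil))
	fmt.Println(w.observe(checkpoint{Size: 5}, nil)) // rollback/split-view attempt
}
```

<p>With per-log state this small, a single witness can cover many logs at negligible cost.</p>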
<p>This efficiency makes it possible to operate witnesses for free as public benefit infrastructure. The <a target="_blank" href="https://witness-network.org/">Witness Network</a> collects public witnesses and maintains an open list of tlogs that the witnesses automatically configure.</p>
<p>For the Geomys instance of the keyserver, I generated a tlog key and then I sent <a target="_blank" href="https://github.com/transparency-dev/witness-network/pull/28">a PR to the Witness Network</a> to add the following lines to the testing log list.</p>
<pre><code class="lang-plaintext">vkey keyserver.geomys.org+16b31509+ARLJ+pmTj78HzTeBj04V+LVfB+GFAQyrg54CRIju7Nn8
qpd 1440
contact keyserver-tlog@geomys.org
</code></pre>
<p>This got my log configured in <a target="_blank" href="https://witness-network.org/witness-tables/">a handful of witnesses</a>, from which I picked three to build the default keyserver witness policy.</p>
<pre><code class="lang-plaintext">log keyserver.geomys.org+16b31509+ARLJ+pmTj78HzTeBj04V+LVfB+GFAQyrg54CRIju7Nn8
witness TrustFabric transparency.dev/DEV:witness-little-garden+d8042a87+BCtusOxINQNUTN5Oj8HObRkh2yHf/MwYaGX4CPdiVEPM https://api.transparency.dev/dev/witness/little-garden/
witness Mullvad witness.stagemole.eu+67f7aea0+BEqSG3yu9YrmcM3BHvQYTxwFj3uSWakQepafafpUqklv https://witness.stagemole.eu/
witness Geomys witness.navigli.sunlight.geomys.org+a3e00fe2+BNy/co4C1Hn1p+INwJrfUlgz7W55dSZReusH/GhUhJ/G https://witness.navigli.sunlight.geomys.org/
group public 2 TrustFabric Mullvad Geomys
quorum public
</code></pre>
<p>The policy format is based on <a target="_blank" href="https://git.glasklar.is/sigsum/core/sigsum-go/-/blob/main/doc/policy.md">Sigsum’s policies</a>, and it encodes the log’s public key and the witnesses’ public keys (for the clients) and submission URLs (for the log).</p>
<p>Tessera supports these policies directly. When minting a new checkpoint, it will reach out in parallel to all the witnesses, and return the checkpoint once it satisfies the policy. Configuration is trivial, and the added latency is minimal (less than one second).</p>
<pre><code class="lang-diff"><span class="hljs-addition">+    witnessPolicy := defaultWitnessPolicy</span>
<span class="hljs-addition">+    if path := os.Getenv("LOG_WITNESS_POLICY"); path != "" {</span>
<span class="hljs-addition">+        witnessPolicy, err = os.ReadFile(path)</span>
<span class="hljs-addition">+        if err != nil {</span>
<span class="hljs-addition">+            log.Fatalln("failed to read witness policy file:", err)</span>
<span class="hljs-addition">+        }</span>
<span class="hljs-addition">+    }</span>
<span class="hljs-addition">+    witnesses, err := tessera.NewWitnessGroupFromPolicy(witnessPolicy)</span>
<span class="hljs-addition">+    if err != nil {</span>
<span class="hljs-addition">+        log.Fatalln("failed to create witness group from policy:", err)</span>
<span class="hljs-addition">+    }</span>

     // [...]

     appender, shutdown, logReader, err := tessera.NewAppender(ctx, driver, tessera.NewAppendOptions().
         WithCheckpointSigner(s).
         WithBatching(1, tessera.DefaultBatchMaxAge).
         WithCheckpointInterval(checkpointInterval).
<span class="hljs-deletion">-        WithCheckpointRepublishInterval(24*time.Hour))</span>
<span class="hljs-addition">+        WithCheckpointRepublishInterval(24*time.Hour).</span>
<span class="hljs-addition">+        WithWitnesses(witnesses, nil))</span>
</code></pre>
<p>On the client side, we can use Torchwood to parse the policy and use it directly with <a target="_blank" href="https://pkg.go.dev/filippo.io/torchwood@v0.8.1-0.20251217175055-7b37840a7b58#VerifyProof">VerifyProof</a> in place of the policy we were manually constructing from the log’s public key.</p>
<pre><code class="lang-diff"><span class="hljs-deletion">-    vkey := os.Getenv("AGE_KEYSERVER_PUBKEY")</span>
<span class="hljs-deletion">-    if vkey == "" {</span>
<span class="hljs-deletion">-        vkey = defaultKeyserverPubkey</span>
<span class="hljs-deletion">-    }</span>
<span class="hljs-addition">+    policyBytes := defaultPolicy</span>
<span class="hljs-addition">+    if policyPath := os.Getenv("AGE_KEYSERVER_POLICY"); policyPath != "" {</span>
<span class="hljs-addition">+        p, err := os.ReadFile(policyPath)</span>
<span class="hljs-addition">+        if err != nil {</span>
<span class="hljs-addition">+            fmt.Fprintf(os.Stderr, "Error: failed to read policy file: %v\n", err)</span>
<span class="hljs-addition">+            os.Exit(1)</span>
<span class="hljs-addition">+        }</span>
<span class="hljs-addition">+        policyBytes = p</span>
<span class="hljs-addition">+    }</span>
<span class="hljs-deletion">-    v, err := note.NewVerifier(vkey)</span>
<span class="hljs-deletion">-    if err != nil {</span>
<span class="hljs-deletion">-        fmt.Fprintf(os.Stderr, "Error: invalid keyserver public key: %v\n", err)</span>
<span class="hljs-deletion">-        os.Exit(1)</span>
<span class="hljs-deletion">-    }</span>
<span class="hljs-deletion">-    policy := torchwood.ThresholdPolicy(2, torchwood.OriginPolicy(v.Name()), torchwood.SingleVerifierPolicy(v))</span>
<span class="hljs-addition">+    policy, err := torchwood.ParsePolicy(policyBytes)</span>
<span class="hljs-addition">+    if err != nil {</span>
<span class="hljs-addition">+        fmt.Fprintf(os.Stderr, "Error: invalid policy: %v\n", err)</span>
<span class="hljs-addition">+        os.Exit(1)</span>
<span class="hljs-addition">+    }</span>
</code></pre>
<p>Again, if you squint you can see that just like tlog proofs are <em>spicy signatures</em>, the policy is a <em>spicy public key</em>. Verification is a deterministic, offline function that takes a policy/public key and a proof/signature, just like digital signature verification!</p>
<p>The policies are a <a target="_blank" href="https://en.wikipedia.org/wiki/Directed_acyclic_graph">DAG</a> that can get complex to match even the strictest uptime requirements. For example, you can require 3 out of 10 witness operators to cosign a checkpoint, where each operator can use any 1 out of N witness instances to do so.</p>
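<p>As an illustration of how such nested thresholds compose (a hypothetical evaluator in plain Go, not Torchwood’s actual policy API; the witness names are made up):</p>

```go
package main

import "fmt"

// node is either a leaf witness or a k-of-n group of children,
// mirroring the witness/group/quorum lines of the policy format.
type node struct {
	Witness   string // set on leaves
	Threshold int    // set on groups
	Children  []node
}

// satisfied reports whether the set of cosigning witnesses meets the
// (possibly nested) threshold policy.
func satisfied(n node, signed map[string]bool) bool {
	if n.Witness != "" {
		return signed[n.Witness]
	}
	count := 0
	for _, c := range n.Children {
		if satisfied(c, signed) {
			count++
		}
	}
	return count >= n.Threshold
}

func main() {
	// 2-of-3 operators, where the third operator may satisfy its slot
	// with any 1 of its 2 witness instances.
	policy := node{Threshold: 2, Children: []node{
		{Witness: "TrustFabric"},
		{Witness: "Mullvad"},
		{Threshold: 1, Children: []node{{Witness: "Geomys-A"}, {Witness: "Geomys-B"}}},
	}}
	fmt.Println(satisfied(policy, map[string]bool{"Mullvad": true, "Geomys-B": true})) // true
	fmt.Println(satisfied(policy, map[string]bool{"Mullvad": true}))                   // false
}
```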
<p>The <a target="_blank" href="https://github.com/FiloSottile/torchwood/commit/e5b50faaad78fe2742e16a770be08746fc1f604e">full change</a> implementing witnessing involves <em>5 files changed, 43 insertions(+), 11 deletions(-)</em>, plus tests.</p>
<h2 id="heading-summing-up">Summing up</h2>
<p>We started with a simple centralized email-authenticated keyserver, and we turned it into a transparent, privacy-preserving, anti-poisoning, and witness-cosigned service.</p>
<p>We did that in four small steps using <a target="_blank" href="https://github.com/transparency-dev/tessera">Tessera</a>, <a target="_blank" href="https://filippo.io/torchwood">Torchwood</a>, and various <a target="_blank" href="https://c2sp.org/">C2SP</a> specifications.</p>
<pre><code class="lang-plaintext">cmd/age-keyserver: add transparency log of stored keys
    5 files changed, 259 insertions(+), 8 deletions(-)
cmd/age-keyserver: use VRFs to hide emails in the log
    3 files changed, 125 insertions(+), 13 deletions(-)
cmd/age-keyserver: hash age public key to prevent log poisoning
    2 files changed, 93 insertions(+), 19 deletions(-)
cmd/age-keyserver: add witness cosigning to prevent split-views
    5 files changed, 43 insertions(+), 11 deletions(-)
</code></pre>
<p>Overall, it took less than 500 lines.</p>
<blockquote>
<p>7 files changed, 472 insertions(+), 9 deletions(-)</p>
</blockquote>
<p>The UX is completely unchanged: there are no keys for users to manage, and the web UI and CLI work exactly like they did before. The only difference is the new <code>-all</code> functionality of the CLI, which allows holding the log operator accountable for all the public keys it could ever have presented for an email address.</p>
<p>The result is deployed live at <a target="_blank" href="http://keyserver.geomys.org">keyserver.geomys.org</a>.</p>
<h2 id="heading-future-work-efficient-monitoring-and-revocation">Future work: efficient monitoring and revocation</h2>
<p>This tlog system still has two limitations:</p>
<ol>
<li><p>To monitor the log, the monitor needs to download it all. This is probably fine for our little keyserver, and even for the Go Checksum Database, but it’s a scaling problem for the Certificate Transparency / Merkle Tree Certificates ecosystem.</p>
</li>
<li><p>The inclusion proof guarantees that the public key is in the log, not that it’s the <em>latest</em> entry in the log for that email address. Similarly, the Go Checksum Database can’t efficiently prove the Go Modules Proxy <code>/list</code> response is complete.</p>
</li>
</ol>
<p>We are working on a design called <a target="_blank" href="https://github.com/transparency-dev/incubator/tree/main/vindex">Verifiable Indexes</a> which plugs on top of a tlog to provide verifiable indexes or even map-reduce operations over the log entries. We expect VI to be production-ready before the end of 2026, while everything above is ready today.</p>
<p>Even without VI, the tlog provides strong accountability for our keyserver, enabling a secure UX that would have simply not been possible without transparency.</p>
<p>I hope this step-by-step demo will help you apply tlogs to your own systems. If you need help, you can join the <a target="_blank" href="http://Transparency.dev">Transparency.dev</a> <a target="_blank" href="https://transparency.dev/slack/">Slack</a>.</p>
]]></content:encoded></item><item><title><![CDATA[And that’s a wrap!]]></title><description><![CDATA[It’s been a month since the Transparency.dev Summit 2025, and what an event it was! In late October, over the course of two and a half days, implementers, operators, and clients of real world transparency systems came together in a beautiful event sp...]]></description><link>https://blog.transparency.dev/and-thats-a-wrap</link><guid isPermaLink="true">https://blog.transparency.dev/and-thats-a-wrap</guid><category><![CDATA[transparency.dev]]></category><category><![CDATA[community]]></category><category><![CDATA[transparency]]></category><category><![CDATA[summit]]></category><dc:creator><![CDATA[Katrina Joyce]]></dc:creator><pubDate>Mon, 24 Nov 2025 12:50:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763986591273/d2ee2885-14fc-4076-bd05-3045133ce5bb.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It’s been a month since the <a target="_blank" href="https://transparency.dev/summit2025/">Transparency.dev Summit 2025</a>, and what an event it was! In late October, over the course of two and a half days, implementers, operators, and clients of real world transparency systems came together in a beautiful event space in Gothenburg, Sweden, to meet face to face, share best practices, and learn about the latest developments in the transparency community. Laughs were had, owls were drawn, free t-shirts were provided, and a life-size 2D cardboard cutout of an attendee unable to make it in person was present. What more could one want from such a gathering?</p>
<h2 id="heading-keynote-the-tlog-ecosystem"><strong>Keynote: The tlog Ecosystem</strong></h2>
<p>The event kicked off with an insightful <a target="_blank" href="https://youtu.be/B-0ZW1ovPmk">keynote</a> delivered by Filippo Valsorda, outlining the current state of affairs in transparency log ecosystems. Filippo gave a thorough overview of everything the transparency community has now, covering specifications, code, shared infrastructure and existing transparency applications, before looking to the future and discussing what comes next. With his keynote, Filippo gave context to the work being done by everyone in the room, tied together much of the content that followed in the next couple of days, and explained how it all fits into the bigger tlog ecosystem picture.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/B-0ZW1ovPmk">https://youtu.be/B-0ZW1ovPmk</a></div>
<p> </p>
<h2 id="heading-certificate-transparency-pushing-forward"><strong>Certificate Transparency pushing forward</strong></h2>
<p>Following the keynote, the first day of the summit focussed primarily on Certificate Transparency (CT). A wonderfully balanced set of talks included points of view from every type of actor that participates in the CT ecosystem, giving the audience a fully fleshed out picture of how CT functions today.</p>
<p>Log operators were represented by <a target="_blank" href="https://youtu.be/o9tEjADoJck">Philippe Boneff and Roger Ng</a> discussing running TesseraCT - a static CT log implementation. <a target="_blank" href="https://youtu.be/GrQN7c9EPhA">Matthew McPherrin</a> represented Let’s Encrypt as both a log operator and a Certificate Authority, talking about the challenges and specific insights that being both provides. Although not a CT log operator himself, <a target="_blank" href="https://youtu.be/MRmjMduVMTk">Dennis Jackson</a> provided a novel idea for how to reduce the load on log operators by serving the content of CT logs as static CT tiles over BitTorrent.</p>
<p>Next up, User Agents and CT enforcers included <a target="_blank" href="https://youtu.be/TM7UTzghDQI">Mustafa Emre Acer</a> presenting how Chrome decides which CT logs qualify to be in their log list, and <a target="_blank" href="https://youtu.be/TcUQJO8UBfU">Roger Ng</a> discussing CT enforcement in Android 16.</p>
<p><a target="_blank" href="https://youtu.be/6fKRsLagk3E">Andrew Ayer</a>, who has been keeping log operators honest for many years now, talked about patterns seen in the various CT log failures he has detected as a log monitor. Certificate monitoring was covered by Program Committee Chair <a target="_blank" href="https://youtu.be/eCMKjWDNHTQ">Rasmus Dahlberg</a> in his talk about filtering out the noise of regular legitimate certificate issuance notifications as certificate lifetimes decrease. Then in a fun twist on monitoring, <a target="_blank" href="https://youtu.be/j43xzM18hyA">Bhushan Lokhande</a> talked about using CT log data for a different purpose - to inspect brand logos and trademarks. His talk demonstrated that CT logs expose previously unseen but potentially interesting datasets.</p>
<p>To complete the CT ecosystem, and after a last minute unexpected circumstance that meant that he was unable to attend the summit in person, Program Committee Chair <a target="_blank" href="https://youtu.be/gfrXTgmbS1s">Martin Hutchinson</a> dialled in remotely to contribute his talk about Verifiable Indexes - the key piece still missing from the CT ecosystem. Once in place, the CT ecosystem will be what it was originally envisaged to be.</p>
<p>After talking to us about <a target="_blank" href="https://youtu.be/GhsSpCAJSnY">things going bad</a> in the world of CT, Joe DeBlasio then encouraged us to look forwards by presenting <a target="_blank" href="https://youtu.be/uSP9uT_wBDw">Merkle Tree Certificates</a> (MTCs) - an early internet draft coauthored by many of the big players in the existing CT community, to plan a redesign for how CT will function in a post-quantum future. Certificate transparency is the longest established and most stable transparency ecosystem there is, and as such, drastic design changes rarely happen. However, as we look forward to what will eventually become a post-quantum world, CT will need to adapt along with everything else.</p>
<h2 id="heading-transparency-applications-left-right-and-centre"><strong>Transparency applications left, right and centre</strong></h2>
<p>For me personally, the second and third days of the summit were extremely special. As a former participant in the CT ecosystem and community, I left the transparency community in 2022 when non-CT transparency applications were few and far between, and those that did exist were just beginning to feel their way through what shape a transparency ecosystem should take. Having returned to the community only a few months ago, hearing talks on such a breadth of transparency applications fully opened my eyes to how far the community and technology has come since then.</p>
<p>To begin, our host in Gothenburg, <a target="_blank" href="https://youtu.be/-5HtYd4O7vE">Fredrik Strömberg</a>, talked about his work on hardware transparency for the Tillitis TKey and HSM, and gave a demo (everyone loves a demo!). <a target="_blank" href="https://youtu.be/W_eT0gCRE24">Adit Sachde</a> followed this to discuss how confidential computing can protect virtual machines in the cloud, and how this could benefit the all-important witness network as it begins to scale, by opening up the option for witnesses to be run in the cloud.</p>
<p>In the Website Transparency portion of the day, <a target="_blank" href="https://youtu.be/4VDGEVtdCAE">Michael Rosenberg, Dennis Jackson</a> and <a target="_blank" href="https://youtu.be/8ZJqV_zog7s">Giulio Berra</a> discussed the existing issues with cryptography in web applications, and presented ideas for how to bring properties such as integrity, consistency and transparency to web-based experiences.</p>
<p>Our next topic was package management, with <a target="_blank" href="https://youtu.be/V9G6MxRifDg">Mechiel Lukkien</a> taking us through how gopherwatch monitors Go modules, <a target="_blank" href="https://youtu.be/-L55qpcIdus">Hayden Blauzvern</a> discussing the software supply chain and transparency for package registries in general, and <a target="_blank" href="https://youtu.be/YN1XiflsK2w">Holger Levsen</a> summarising the state of reproducible builds, and inviting the community to collaborate on bringing transparency into associated projects.</p>
<p>After lunch on the second day of the summit, we had one of the more whimsical moments of the conference. A life-sized cardboard cutout of Martin Hutchinson (our Program Committee Chair who was unable to attend last minute) arrived, and, much to everyone’s delight, it was established that cardboard Martin would be joining us all for dinner that night.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763985436747/5e971c90-fa20-4fad-b349-bdedebfdb6d0.jpeg" alt class="image--center mx-auto" /></p>
<p>After this brief comical interlude, <a target="_blank" href="https://youtu.be/RVePsCd1Ul4">Alexis Hancock</a> discussed the very real impact transparency could have on accountability within Digital ID systems that are being created by governments all over the world.</p>
<p>This was followed by talks on Binary/Firmware Transparency, including an entertaining talk and demo by <a target="_blank" href="https://youtu.be/Srv2R7gZm6A">Andrea Barisani and Daniele Bianco</a> on implementing transparency in UEFI boot managers, <a target="_blank" href="https://youtu.be/ZF4kLvm0k_o">Tom Binder</a> discussing binary transparency based on signed endorsements - something Google has tailored to their use case of managing the boundary between open-source and internal repositories at Google, <a target="_blank" href="https://youtu.be/TZMspOgnr_A">Billy Lau</a> bringing us up to speed on Android’s Firmware Binary Transparency, and an energetic talk from <a target="_blank" href="https://youtu.be/zIanFo7l9No">Tiziano Santoro</a> on distributed Content Addressable Storage.</p>
<p>To complete the wider transparency topics set, on the final day of the summit <a target="_blank" href="https://youtu.be/GKn0wqwQHCw">Melissa Chase</a> gave a wonderful overview of Key Transparency (KT), with <a target="_blank" href="https://youtu.be/71Vos6bNhI0">Brendan McMillion</a> following it up with why the KT protocol being standardised by the IETF helps remove the need for an independent and trustworthy third party to help operate the system, and <a target="_blank" href="https://youtu.be/pTWs1R2iCRk">Elena Pagnin</a> presenting a novel idea for achieving split-view protection in centralised transparency logs.</p>
<p>My expertise on these topics is limited, so I have held back from going further in summarising the content of these talks for fear of misrepresenting them. However, I strongly encourage you to check out each and every talk on the <a target="_blank" href="https://www.youtube.com/@transparency-dev">transparency.dev youtube channel</a>.</p>
<h2 id="heading-building-together"><strong>Building together</strong></h2>
<p>While all of the talk content at the summit was top notch, it was the moments in and around the talks that really captured the spirit of the community. People were discussing ideas, collaborations and future work during every single break, every social dinner, every possible moment available. They couldn’t get enough of it.</p>
<p>Over the three days, we held two slots for <a target="_blank" href="https://youtu.be/pQYa3IwSPLU">breakout</a> <a target="_blank" href="https://youtu.be/ojX8mHoijsU">sessions</a>, during which attendees came together in groups to discuss whatever topics they felt would be most beneficial to discuss in person. Topics included witnessing, MTCs, CT, transparency for web applications, Tillitis hardware, KT, and reproducible builds. These turned out to be some of the most popular sessions of the whole event, with everyone grumbling when encouraged back to the main room to move on to the next session. In our community, people thrive when given the opportunity to work together.</p>
<p>In Filippo’s keynote, he emphasised that ‘the hard part is the part that you can’t build alone.’ He meant it in terms of software - for most systems you just build the thing and put it out there, but in transparency systems you have to have third parties involved for the system to work. But I think that with that quote he has also stumbled upon the foundation of what makes this community great. Everyone here understands and accepts that, for transparency to work, we can’t do it alone. And although the hard part may be the part that you can’t build alone, with an incredible community of brilliant individuals all pulling together, the hard part becomes a hell of a lot easier.</p>
<p><em>A huge thank you to the hosts of the</em> <a target="_blank" href="http://Transparency.dev"><em>Transparency.dev</em></a> <em>Summit 2025, Glasklar Teknik, and our sponsors, Google, Google Chrome, Chalmers University of Technology, the University of Gothenburg and Trail of Bits, without whom the summit could not have happened.</em></p>
]]></content:encoded></item><item><title><![CDATA[Tessera v1.0 is here]]></title><description><![CDATA[The TrustFabric team at Google are thrilled to announce that Tessera, the C2SP tlog-tiles compliant transparency log library, is now generally available!
For over a decade our team has been building and running transparency logs. What began with purp...]]></description><link>https://blog.transparency.dev/tessera-v1-0-is-here</link><guid isPermaLink="true">https://blog.transparency.dev/tessera-v1-0-is-here</guid><category><![CDATA[transparency]]></category><category><![CDATA[tessera]]></category><category><![CDATA[certificate transparency]]></category><category><![CDATA[transparency.dev]]></category><dc:creator><![CDATA[TrustFabric Team]]></dc:creator><pubDate>Mon, 22 Sep 2025 16:04:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/UxPYxMIDcKU/upload/70509f747673f12444db65dba4c7432e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The TrustFabric team at Google are thrilled to announce that <a target="_blank" href="https://github.com/transparency-dev/tessera">Tessera</a>, the <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/tlog-tiles.md">C2SP tlog-tiles</a> compliant transparency log library, is now <a target="_blank" href="https://github.com/transparency-dev/tessera/releases/tag/v1.0.0">generally available</a>!</p>
<p>For over a decade our team has been building and running transparency logs. What began with purpose-built Certificate Transparency (CT) logs soon led to the generalisation and broader adoption of these principles by other ecosystems. To encourage and facilitate this adoption we provided a general purpose transparency log implementation, Trillian, which has underpinned ecosystems like CT, Go's SumDB, and Sigstore ever since.</p>
<p>The growth and success of these ecosystems has driven demand for transparency logs that support higher throughput while also being cheaper and easier to operate. Tessera is now ready to meet that demand.</p>
<h2 id="heading-key-features"><strong>Key Features</strong></h2>
<ul>
<li><p><strong>Ease of use:</strong> As a library rather than a microservice architecture, Tessera is built directly into application code. This gives you the freedom to build and deploy your application in the way that makes most sense for you.</p>
</li>
<li><p><strong>Tiles API:</strong> The <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/tlog-tiles.md">C2SP tlog-tiles API</a> is central to Tessera, helping you to build transparency-enabled applications that automatically benefit from many of the lessons learned over the last 10 years, and can easily integrate with shared services such as witnessing.</p>
</li>
<li><p><strong>Scalable architecture:</strong> Our <a target="_blank" href="https://github.com/transparency-dev/tessera/blob/main/docs/design/philosophy.md">design philosophy</a> enables Tessera to play to the strengths of the target environment. Writes take an idiomatic approach, and reads are highly cacheable and, in many cases, able to be served directly from the underlying infrastructure.</p>
</li>
<li><p><strong>Multi-environment support:</strong> Logs built with Tessera can run natively in Cloud (GCP or AWS), or on-prem with Vanilla S3+MySQL or POSIX filesystem backends.</p>
</li>
<li><p><strong>Resilience and Availability:</strong> Applications can run multiple instances without compromising log storage, resulting in improved service resilience and availability in the face of frontend issues.</p>
</li>
<li><p><strong>Customizable log personalities:</strong> You can build transparency applications tailored to your specific use-case.</p>
</li>
<li><p><strong>Integrated support for witnessing:</strong> <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/tlog-witness.md">C2SP-compliant witnessing</a> is supported out of the box, allowing easy integration with either the growing public witness network, or private witnesses.</p>
</li>
<li><p><strong>Reliable support:</strong> Tessera v1.0 is released with the Google Open Source <a target="_blank" href="https://opensource.google/documentation/policies/library-breaking-change">breaking changes policy</a>, meaning reliable library maintenance and support.</p>
</li>
</ul>
<h2 id="heading-performance"><strong>Performance</strong></h2>
<p>Tessera has proven stable under high write throughput, both in our own testing environment and in <a target="_blank" href="https://github.com/transparency-dev/tesseract/blob/main/docs/performance.md">long-running instances</a> of our Tessera-based Certificate Transparency log implementation, <a target="_blank" href="https://github.com/transparency-dev/tesseract">TesseraCT</a>.</p>
<p>With write throughput in excess of 10,000 entries per second backed by POSIX, and 5,000 entries/s on GCP, Tessera scales to meet most deployment scenarios. More detailed notes are available in the <a target="_blank" href="https://github.com/transparency-dev/tessera/blob/main/docs/performance.md">repo</a>, and in the <a target="_blank" href="https://ipng.ch/s/articles/2025/07/26/certificate-transparency-part-1-tesseract/#:~:text=TesseraCT%3A%20Loadtesting%20POSIX">article</a> by IPng discussing their own Tessera-based deployment.</p>
<h2 id="heading-getting-started-with-tessera"><strong>Getting started with Tessera</strong></h2>
<p>Tessera is the path we recommend for building new transparency logs. To <a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main?tab=readme-ov-file#getting-started">get started</a>, check out the <a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main/cmd/conformance#codelab">Codelab</a> for a simple step-by-step guide to setting up your first log, writing a few entries to it, and then inspecting its contents.</p>
<p>Here's what others have to say:</p>
<ul>
<li><p><em>"Tessera has provided a fantastic developer experience while building the next version of Sigstore's signature transparency log. We've been able to scale up easily and reduce complexity in our infrastructure."</em></p>
<p>  <em>-- Sigstore TL</em></p>
</li>
<li><p><em>"Tessera was a breeze to work with! The API is straightforward and the storage backends are easy to import and swap for development and production."<br />  -- Adit Sachde</em></p>
</li>
<li><p><em>"IPng Networks in Switzerland has deployed the first self-hosted Tessera-based static CT log, and reported a successful deployment in the Certificate Transparency community. The underlying Tessera library is blazingly fast compared to alternatives, reaching easily 18K leaves/sec, while the global WebPKI throughput is two orders of magnitude smaller. TesseraCT, built on this library, scales very well, allowing for fine grained configuration and off the shelf tools like nginx to serve the read-path. More details in</em> <a target="_blank" href="https://ipng.ch/s/articles/2025/08/24/certificate-transparency-part-3-operations/"><em>https://ipng.ch/s/articles/2025/08/24/certificate-transparency-part-3-operations/</em></a> <em>."<br />  -- IPng Networks</em></p>
</li>
</ul>
<h2 id="heading-wed-love-to-hear-from-you"><strong>We’d love to hear from you!</strong></h2>
<p>If you have any questions or would like guidance on working with Tessera please reach out to the team by posting questions on the <a target="_blank" href="https://transparency.dev/slack/">transparency.dev community slack</a>. We’re here to help, and hope that Tessera 1.0 will be instrumental in improving the verifiability of your systems.</p>
]]></content:encoded></item><item><title><![CDATA[Introducing TesseraCT]]></title><description><![CDATA[We’re excited to introduce TesseraCT, a new tile-based Certificate Transparency (CT) log implementation built on Tessera. TesseraCT is designed to make it easier and more cost-effective to operate CT logs at scale, while maintaining strong performanc...]]></description><link>https://blog.transparency.dev/introducing-tesseract</link><guid isPermaLink="true">https://blog.transparency.dev/introducing-tesseract</guid><category><![CDATA[certificate transparency]]></category><category><![CDATA[tesseract]]></category><category><![CDATA[tessera]]></category><dc:creator><![CDATA[TrustFabric Team]]></dc:creator><pubDate>Thu, 14 Aug 2025 17:01:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/cZx4tDpIwPA/upload/00a72c437c168770e239693a08a729c8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We’re excited to introduce <a target="_blank" href="https://github.com/transparency-dev/tesseract">TesseraCT</a>, a new tile-based Certificate Transparency (CT) log implementation built on <a target="_blank" href="https://blog.transparency.dev/tessera-bigger-and-beta">Tessera</a>. TesseraCT is designed to make it easier and more cost-effective to operate CT logs at scale, while maintaining strong performance and reliability.</p>
<p>At this stage we are releasing the alpha of TesseraCT, ready for early testing and use by others.</p>
<h2 id="heading-what-is-tesseract">What is TesseraCT?</h2>
<p>TesseraCT is a CT server that implements the <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md">Static CT API</a>. It is built on top of <a target="_blank" href="https://github.com/transparency-dev/tessera">Tessera</a>, our Go library for tile-based transparency logs.</p>
<p>Historically, operating CT logs has been complex and costly, but TesseraCT changes that by providing a modern CT implementation that incorporates lessons learned from operating real-world logs at scale.</p>
<h3 id="heading-key-features">Key features</h3>
<ul>
<li><p><strong>Tile-based architecture and API</strong>: Enables efficient reads and writes, following the <a target="_blank" href="https://c2sp.org/static-ct-api">Static CT API C2SP specification</a>.</p>
</li>
<li><p><strong>Cloud-native &amp; on-prem</strong>: Supports AWS, Google Cloud, POSIX filesystems, or vanilla S3/MySQL storage systems.</p>
</li>
<li><p><strong>High uptime</strong>: Engineered to offer high availability, as required by user agents. While TesseraCT can run as a single instance, it will also safely run with multiple concurrent instances for higher availability.</p>
</li>
<li><p><strong>Low operational overhead</strong>: Fewer moving parts and easier setup than <a target="_blank" href="https://github.com/google/trillian">Trillian</a>+<a target="_blank" href="https://github.com/google/certificate-transparency-go">CTFE</a>. At its simplest, TesseraCT requires only a single server rather than three, and is simpler to configure.</p>
</li>
<li><p><strong>Future-proof</strong>: Designed for the highest level of availability, and to scale easily as certificate lifetimes get shorter and the volume of certificates issued increases.</p>
</li>
<li><p><strong>Fast merging:</strong> Entries are integrated into the log within seconds, with the option to wait for submissions to be fully integrated before client requests return.</p>
</li>
</ul>
<h3 id="heading-where-can-it-run">Where can it run?</h3>
<p>TesseraCT can be run on Google Cloud Platform (GCP), Amazon Web Services (AWS), POSIX filesystems, or using vanilla S3+MySQL storage systems. We have tested each of these implementations using various deployment setups, and have confirmed that they are able to handle the current certificate submission rate seen by existing logs, of roughly <a target="_blank" href="https://ct.cloudflare.com/">300</a> certificates per second. At the higher end, our staging GCP logs happily accept up to 1.7k QPS, and a local POSIX deployment was tested up to 10k QPS! More details and reproducible <a target="_blank" href="https://github.com/transparency-dev/tesseract/blob/main/docs/performance.md">performance tests</a> are available in the TesseraCT repo.</p>
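<p>To put those rates in context, here is a hedged back-of-envelope sketch (not TesseraCT code) of the daily volume a log must absorb, using the ~300 certificates/second figure above and the 256-entry bundles of the Static CT API:</p>

```go
package main

import "fmt"

func main() {
	// Figures taken from the post: roughly 300 certificates/second
	// across existing logs, and Static CT API entry bundles of
	// 256 entries each.
	const certsPerSecond = 300
	const secondsPerDay = 86400
	const entriesPerBundle = 256

	entriesPerDay := certsPerSecond * secondsPerDay
	bundlesPerDay := entriesPerDay / entriesPerBundle

	fmt.Println("entries/day:", entriesPerDay) // 25920000
	fmt.Println("full entry bundles/day:", bundlesPerDay) // 101250
}
```

<p>At ~26 million entries per day, a log ingests on the order of a hundred thousand immutable, cacheable entry bundles daily, which is why the read path can lean so heavily on static object storage.</p>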
<p>We recommend you choose which platform to use with TesseraCT based on your existing production environment:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755189703774/40f3fb48-5900-4275-975f-3f32a56e1574.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-tile-based-logs">Tile-Based Logs</h2>
<p><a target="_blank" href="https://transparency.dev/articles/tile-based-logs/">Tile-based logs</a> became popular in the CT world due to the caching benefits they provide, making read requests to the log simpler and cheaper to respond to. Even <a target="_blank" href="https://github.com/google/trillian">Trillian</a>, used by our <a target="_blank" href="https://github.com/google/certificate-transparency-go">existing RFC 6962 CT log implementation</a>, relies on a tiled architecture under the hood. However, TesseraCT directly exposes <a target="_blank" href="https://c2sp.org/">C2SP</a> compliant tiles through the <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md">static CT API</a>.</p>
<p>In contrast to the microservices-based approach of Trillian and the CTFE, TesseraCT offers a fresh architecture with a flexible, simplified deployment model. This is a transformative step for CT log infrastructure, for existing and new log operators alike.</p>
<p>TesseraCT builds on the heritage of Trillian+CTFE, and was designed after much discussion with current CT log operators. It is just one of a growing number of CT log implementations that conform to the static CT API specification, including the trailblazing <a target="_blank" href="https://sunlight.dev/">Sunlight</a>, <a target="_blank" href="https://github.com/cloudflare/azul">Azul</a>, <a target="_blank" href="https://github.com/aditsachde/itko">Itko</a> and <a target="_blank" href="https://github.com/Barre/compact_log">CompactLog</a>. Having multiple log implementations provides diversity and resilience, and makes it possible to run static CT API logs on a variety of platforms, resulting in a healthier Certificate Transparency ecosystem overall.</p>
<h2 id="heading-try-out-tesseract">Try out TesseraCT!</h2>
<p>We’ve launched the alpha release of TesseraCT so that developers, log operators, and the CT ecosystem can test it as early as possible and provide feedback to be incorporated into future releases.</p>
<p>There are three TesseraCT <a target="_blank" href="https://github.com/transparency-dev/tesseract?tab=readme-ov-file#test_tube-public-test-instances">staging logs</a> available for testing:</p>
<ul>
<li><p><strong>arche2025h1</strong> at https://arche2025h1.staging.ct.transparency.dev</p>
</li>
<li><p><strong>arche2025h2</strong> at https://arche2025h2.staging.ct.transparency.dev</p>
</li>
<li><p><strong>arche2026h1</strong> at https://arche2026h1.staging.ct.transparency.dev</p>
</li>
</ul>
<p><strong>CT log operators</strong>: Try out the <a target="_blank" href="https://github.com/transparency-dev/tesseract?tab=readme-ov-file#test_tube-public-test-instances">Arche logs</a>, or run your own! See our <a target="_blank" href="https://github.com/transparency-dev/tesseract?tab=readme-ov-file#getting-started">Getting Started</a> guide and <a target="_blank" href="https://github.com/transparency-dev/tesseract/blob/main/docs/architecture.md">architecture overview</a> for guidance on how. Give TesseraCT a try and help us answer the most important question: does this reduce your pain?</p>
<p><strong>CT monitors</strong>: The <a target="_blank" href="https://github.com/transparency-dev/tesseract?tab=readme-ov-file#test_tube-public-test-instances">Arche logs</a> are now available for monitoring. Let us know if you run into any problems querying the logs.</p>
<p><strong>Certificate Authorities</strong>: Submit certificates to our <a target="_blank" href="https://github.com/transparency-dev/tesseract?tab=readme-ov-file#test_tube-public-test-instances">Arche logs</a>.</p>
<p>We’re excited to share this milestone and we welcome your <a target="_blank" href="https://github.com/transparency-dev/tesseract?tab=readme-ov-file#wave-contact">feedback</a>. TesseraCT is about making it less painful to operate transparency infrastructure, so try the alpha and tell us what hurts. Let’s make the next generation of transparency logs better, together.</p>
<p>Keep an eye on the <a target="_blank" href="https://github.com/transparency-dev/tesseract">TesseraCT repo</a> to follow along, and reach out with any questions or feedback on the <a target="_blank" href="https://transparency.dev/slack/">transparency.dev slack</a>. We look forward to hearing from you!</p>
<h2 id="heading-whats-next">What’s Next?</h2>
<p>We are continuing to push TesseraCT's development towards the goal of production tile-based CT logs. Future versions of TesseraCT will include additional features such as <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/tlog-witness.md">witnessing</a> and a root update process integrated with <a target="_blank" href="https://www.ccadb.org/">CCADB</a>. Check out our <a target="_blank" href="https://github.com/transparency-dev/tesseract?tab=readme-ov-file#motorway-roadmap">roadmap</a> for more information.</p>
]]></content:encoded></item><item><title><![CDATA[Tessera: Bigger and Beta]]></title><description><![CDATA[Tessera Beta is here!
Tessera is a tile-based transparency log library designed to improve the experience of log-operators running logs in any transparency space. After announcing the alpha release of Trillian Tessera late last year, we are excited t...]]></description><link>https://blog.transparency.dev/tessera-bigger-and-beta</link><guid isPermaLink="true">https://blog.transparency.dev/tessera-bigger-and-beta</guid><category><![CDATA[tessera]]></category><category><![CDATA[trillian]]></category><dc:creator><![CDATA[TrustFabric Team]]></dc:creator><pubDate>Thu, 12 Jun 2025 10:12:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/gREi-9tI5Mg/upload/de9cd6639bc0bff4e90f1956faa2beee.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Tessera Beta is here!</p>
<p><a target="_blank" href="https://github.com/transparency-dev/tessera">Tessera</a> is a <a target="_blank" href="https://transparency.dev/articles/tile-based-logs/">tile-based transparency log</a> library designed to improve the experience of log-operators running logs in any transparency space. After <a target="_blank" href="https://blog.transparency.dev/announcing-the-alpha-release-of-trillian-tessera">announcing</a> the alpha release of Trillian Tessera late last year, we are excited to present the latest development milestone - the beta release of Tessera - which takes us one step closer to that goal!</p>
<h2 id="heading-from-alpha-to-beta-whats-changed"><strong>From Alpha to Beta: What’s Changed?</strong></h2>
<p>As with any beta release, the objective was to produce a version of Tessera ready for external users to test and provide feedback on, so that we can go on to produce a production-ready release. With that in mind, the <a target="_blank" href="https://github.com/transparency-dev/tessera/releases">changes</a> between the alpha and beta releases improve the usability of the API and the stability of the codebase, alongside adding a few new features and tools. Highlights include:</p>
<ul>
<li><p><strong>Snappier name:</strong> We’ve dropped the ‘Trillian’ prefix; it’s now simply ‘Tessera’. Although inspired by lessons learned from running Trillian, Tessera is a project in its own right, now quite different from Trillian. It’s time for it to leave the Trillian nest and independently find its own way in the world. Plus, it’s shorter and doesn't contain a hyphen, so less typing, and no aliasing required when importing the Go package.</p>
</li>
<li><p><strong>API Improvements:</strong> The Tessera API is more intuitive and streamlined, with types and functions that users of the library don’t need now hidden away.</p>
</li>
<li><p><strong>Expanded anti-spam support:</strong> Anti-spam support for AWS and POSIX has been added alongside the previously existing GCP support.</p>
</li>
<li><p><strong>New tools:</strong> We’ve added an <code>fsck</code> tool for checking the self-consistency and integrity of tlog-tiles logs via HTTP. Additionally, an experimental mirroring tool has been introduced, initially supporting local filesystem destinations.</p>
</li>
<li><p><strong>Cleaner docs:</strong> Documentation improvements and updates ensure clarity and accuracy for users.</p>
</li>
<li><p><strong>Bug fixes:</strong> We’ve addressed a number of bugs, including fixes for missing errors, inconsistencies in function names, and issues with checkpoint publishing in GCP and AWS.</p>
</li>
</ul>
<h2 id="heading-its-testing-time"><strong>It’s Testing Time</strong></h2>
<p>The latest improvements to Tessera refine the developer experience to the point that it is now ready for beta-level testing.  So far, we’ve validated the functionality of Tessera by successfully running a log on GCP, scaling to over 2 billion entries.  In parallel, we’ve also run a log on AWS, growing it to 800 million entries, demonstrating reliable performance across both platforms.</p>
<p>We invite all existing log operators, along with anyone else with an interest in improving and supporting transparency ecosystems, to try spinning up a log using the beta version of Tessera, and let us know how it goes!  Feedback on your experience is invaluable, as hearing from external users about how Tessera fares when used for real applications helps us ensure that the upcoming production release is stable and meets your needs and expectations.</p>
<h2 id="heading-how-to-get-involved"><strong>How to Get Involved</strong></h2>
<p>Tessera is designed to make it easy to build and deploy new transparency logs on supported infrastructure. To get started, check out the <a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main/cmd/conformance#codelab">Codelab</a>, which provides a simple step-by-step guide to set up your first log, write a few entries to it, and then inspect its contents.</p>
<p>To start experimenting with Tessera you can choose one of these example personalities:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main/cmd/conformance/posix#bring-up-a-log">POSIX</a>: example of operating a log backed by a local filesystem. This example runs an HTTP web server that takes arbitrary data and adds it to a file-based log.</p>
</li>
<li><p><a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main/cmd/conformance/mysql#bring-up-a-log">MySQL</a>: example of operating a log that uses MySQL. This example is most easily deployed via docker compose, allowing for easy setup and teardown.</p>
</li>
<li><p><a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main/deployment/live/gcp/conformance#manual-deployment">GCP</a>: example of operating a log running on GCP. This example can be deployed via terraform.</p>
</li>
<li><p><a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main/deployment/live/aws/codelab">AWS</a>: example of operating a log running on AWS. This example can be deployed via terraform.</p>
</li>
</ul>
<p>In addition, keep an eye on the <a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main?tab=readme-ov-file#getting-started">Getting Started</a> section in the <a target="_blank" href="https://github.com/transparency-dev/tessera/tree/main?tab=readme-ov-file#tessera">Tessera README</a> for future example personalities.</p>
<p>For any questions or guidance on working with Tessera please reach out to the team by posting questions on the <a target="_blank" href="https://transparency.dev/slack/">transparency.dev community slack</a>. We can’t wait to hear what you think!</p>
]]></content:encoded></item><item><title><![CDATA[Transparency.dev Summit 2025 CFP is now open!]]></title><description><![CDATA[We are excited to announce that the Call for Papers/Proposals/Presentations for the Transparency.dev Summit 2025 is now open!
Earlier this year we announced the second annual Transparency.dev Summit, to be hosted by Glasklar Teknik on 20th-22nd Octob...]]></description><link>https://blog.transparency.dev/transparencydev-summit-2025-cfp-is-now-open</link><guid isPermaLink="true">https://blog.transparency.dev/transparencydev-summit-2025-cfp-is-now-open</guid><category><![CDATA[transparency.dev]]></category><category><![CDATA[certificate transparency]]></category><dc:creator><![CDATA[Katrina Joyce]]></dc:creator><pubDate>Mon, 02 Jun 2025 16:32:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/82TpEld0_e4/upload/910165c410f6747c21ce13157f278bfc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are excited to announce that the Call for Papers/Proposals/Presentations for the <a target="_blank" href="https://transparency.dev/summit2025/">Transparency.dev Summit 2025</a> is now open!</p>
<p>Earlier this year we <a target="_blank" href="https://blog.transparency.dev/announcing-transparencydev-summit-2025">announced</a> the second annual Transparency.dev Summit, to be hosted by <a target="_blank" href="https://www.glasklarteknik.se/">Glasklar Teknik</a> on 20th-22nd October 2025 in Gothenburg, Sweden. Led by Rasmus Dahlberg and Martin Hutchinson, this year’s program committee would like to welcome you to submit proposals for talks and presentations for the summit.</p>
<p>👉 <a target="_blank" href="https://forms.gle/qtftQgbd3wmfqLkY6">Submit your proposal here</a></p>
<p>If you have a talk, presentation or idea you'd like to share at the summit, we'd love to hear it. Submissions are welcomed from first-time speakers, well established members of the transparency community, and everyone in between!</p>
<p>The official deadline is August 31st, but decisions will be made and presentation proposals accepted on a rolling basis as they come in, so submit ideas sooner rather than later. We'd hate for the event to miss out on a brilliant topic because the schedule is already full!</p>
<p>We’re looking forward to seeing what you come up with!</p>
]]></content:encoded></item><item><title><![CDATA[Hardening Witnesses with Confidential Computing]]></title><description><![CDATA[Witnesses verify that the verifiable logs they are observing maintain their append-only property and countersign verified checkpoints. Clients can use these countersigned checkpoints to protect themselves against split-views, an attack where a log pr...]]></description><link>https://blog.transparency.dev/hardening-witnesses-with-confidential-computing</link><guid isPermaLink="true">https://blog.transparency.dev/hardening-witnesses-with-confidential-computing</guid><category><![CDATA[Confidential Computing]]></category><category><![CDATA[armoredwitness]]></category><dc:creator><![CDATA[Adit Sachde]]></dc:creator><pubDate>Wed, 30 Apr 2025 04:00:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/kYlYwQze5vI/upload/87aae3ebc5fd4542d1fa676a80c1dc0c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Witnesses verify that the verifiable logs they are observing maintain their append-only property and countersign verified checkpoints. Clients can use these countersigned checkpoints to protect themselves against split-views, an attack where a log presents inconsistent checkpoints to different clients.</p>
<p>Detecting witness private key misuse is a much harder problem than detecting private key misuse by a log. Inclusion proofs, such as signed certificate timestamps, can be cross-checked against the current state of the log, but witnesses do not store a history of the checkpoints they signed. To protect against private key material leaks, the <a target="_blank" href="https://github.com/transparency-dev/armored-witness">armored witness</a> project uses custom hardware.</p>
<p>Confidential computing makes it possible to provide similar guarantees for witnesses running in the cloud. The <a target="_blank" href="https://github.com/aditsachde/confidential-witness/tree/main?tab=readme-ov-file">confidential witness</a> project provides an implementation for running a witness on GCP with public audit proofs. A running witness can be found at <code>buoyant-breeze.itko.dev</code> with audit logs found in <a target="_blank" href="https://github.com/aditsachde/confidential-witness-audits/tree/main/buoyant-breeze">this repository</a>.</p>
<h3 id="heading-what-is-confidential-computing">What is confidential computing?</h3>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcingjS98qdxr9f39OJm4Eb9oeICkQmkLohUuXDsn7hu9BoAIadw1A0VZpyIE8_af1KoQwFygHOL8a_uOcrA468uwWiXBFnDvjiO2Zk9NKXU9GwKmNU2_n8sApuuCgjE6u4e43ftA?key=EoUXhfJXGN6cg7kaxp3VavGD" alt /></p>
<p>Confidential computing aims to provide a guarantee to a remote client that the server is running a specific version of software, relying on support from cloud hardware and software. In our case, GCP provides the Confidential Space image.</p>
<p>This image uses AMD’s Secure Encrypted Virtualization technology to receive a signed attestation that it is isolated from the Compute Engine hypervisor and that it has not been tampered with. Next, it launches the container specified by the operator and injects an IAM token into the container with these attestations. The operator’s software can use this token with remote services to provide the remote service with details about the currently running version of software.</p>
<p>Confidential computing shifts the trust boundaries of traditional applications. Instead of simply assuming that the service operator is running a correct version of software, we can rely on attestations generated by AMD and GCP to know that the service operator is running the correct version of software. We still have to trust someone, but it is likely that we trust the operational practices of AMD and GCP more than we trust those of the witness operators.</p>
<h3 id="heading-trust-flow-of-a-confidential-witness">Trust flow of a confidential witness</h3>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXc2NZrcwZG0bnl5fLH9ewbdgCmLFt6JtCV2yS86lWP7QHu6vPj-D8D8SUEtRhVjqO4wP7YBScNJ1so9hksaDwUGu5LoF0UonNaEIPLy81VzOJdtex65V5UhAMNAOx_xI5kpgmJf4A?key=EoUXhfJXGN6cg7kaxp3VavGD" alt /></p>
<p>The confidential witness software runs within the secure space without any state. The private key material is generated by and stored in GCP’s Key Management Service, ensuring that the operator does not have a copy unbound by the access policies, and other state is stored in a Cloud Storage bucket, both of which are protected with a set of IAM rules.</p>
<pre><code>  attribute_condition = 
    <span class="hljs-string">'STABLE'</span> <span class="hljs-keyword">in</span> assertion.submods.confidential_space.support_attributes &amp;&amp;
    assertion.swname==<span class="hljs-string">'CONFIDENTIAL_SPACE'</span> &amp;&amp;
    assertion.submods.container.image_reference==<span class="hljs-string">'${var.shasum}'</span> &amp;&amp;
    assertion.submods.gce.project_id==<span class="hljs-string">'${var.project_id}'</span> &amp;&amp;
    assertion.submods.container.env.WITNESS_KEY==<span class="hljs-string">'${local.witness_key}'</span> &amp;&amp;
    assertion.submods.container.env.WITNESS_NAME==<span class="hljs-string">'${local.witness_name}'</span> &amp;&amp;
    assertion.submods.container.env.WITNESS_AUDIENCE==<span class="hljs-string">'${local.witness_audience}'</span> &amp;&amp;
    <span class="hljs-string">'${google_service_account.witness_compute_engine.email}'</span> <span class="hljs-keyword">in</span> assertion.google_service_accounts
</code></pre><p>One of the attributes provided in the IAM token is the hash of the container image. This attribute allows us to strongly link the image to a specific GitHub Actions workflow and the specific hash of the upstream confidential witness repository using Cosign. We can now audit the source code that went into building the container, providing an opportunity to ensure that there is nothing in the source that will allow the witness to maliciously sign checkpoints.</p>
<p>On the other side, we also need to make sure that the IAM rules for the cloud storage bucket and KMS are set up correctly to only accept requests from a container running an image with a hash that is known to be good.</p>
<p>Here, we rely on a <a target="_blank" href="https://github.com/aditsachde/confidential-witness-audits/blob/main/.github/workflows/audit.yml">GitHub Actions workflow</a> that calls the GCP APIs to fetch audit logs and configuration information, and then publishes the data to a git repository. By using GitHub Actions instead of relying on the operator to publish the information, we can ensure that the data has not been tampered with prior to publishing.</p>
<h3 id="heading-putting-it-all-together">Putting it all together</h3>
<p>Let’s walk through using these tools to verify the source of the Buoyant Breeze witness.</p>
<p>1. Look at <a target="_blank" href="https://github.com/aditsachde/confidential-witness-audits/blob/main/buoyant-breeze/attestation_verifier_provider.json">the audit logs</a> to find what version of software is allowed to access the private key for the witness stored in GCP’s Key Management Service.</p>
<pre><code>ghcr.io/aditsachde/confidential-witness@sha256:b634433ac01a0f43c05bbeb257044b990fee51a3128a5a5310192a5bddc9bc2d
</code></pre><p>2. Use the GitHub CLI to verify the attestation. Here, we can also specify the source commit, which can be looked up at <a target="_blank" href="https://github.com/aditsachde/confidential-witness/attestations">https://github.com/aditsachde/confidential-witness/attestations</a>.</p>
<pre><code>gh attestation verify oci://ghcr.io/aditsachde/confidential-witness@sha256:b634433ac01a0f43c05bbeb257044b990fee51a3128a5a5310192a5bddc9bc2d --repo aditsachde/confidential-witness --source-digest 5b89c70e59060b51763219e8315cd9c2b4c45f1c
</code></pre><p>3. Audit <a target="_blank" href="https://github.com/aditsachde/confidential-witness/tree/5b89c70e59060b51763219e8315cd9c2b4c45f1c">the source code</a> to ensure that this container will only sign properly verified checkpoints.</p>
<p>4. Use the audit logs to verify that the operator has configured everything in the project correctly and to our expectations, so that there are no other principals authorized to access private key material.</p>
<h3 id="heading-confidential-witness">Confidential Witness</h3>
<p>The confidential witness project provides an implementation of this concept, adapting <a target="_blank" href="https://github.com/transparency-dev/witness/tree/main/cmd/omniwitness">omniwitness</a> to run in a GCP confidential space with publicly published audit logs. The implementation is still a prototype, and not yet ready for production, but shows how these tools can be applied to the witness ecosystem. Interested in running your own witness or getting involved in the project? Join the community on the <a target="_blank" href="https://join.slack.com/t/transparency-dev/shared_invite/zt-27pkqo21d-okUFhur7YZ0rFoJVIOPznQ">transparency.dev Slack</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Merklemap: Scaling Certificate Transparency with 100B+ Rows of Data]]></title><description><![CDATA[Merklemap is a certificate transparency monitor that not only continuously fetches the latest data from certificate transparency logs in real-time, but also stores that data for search, archival, and statistical purposes. While certificate transparen...]]></description><link>https://blog.transparency.dev/merklemap-scaling-certificate-transparency-with-100b-rows-of-data</link><guid isPermaLink="true">https://blog.transparency.dev/merklemap-scaling-certificate-transparency-with-100b-rows-of-data</guid><category><![CDATA[merklemap]]></category><category><![CDATA[certificate transparency]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[zfs]]></category><dc:creator><![CDATA[Pierre Barre]]></dc:creator><pubDate>Wed, 23 Apr 2025 11:41:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/ZiQkhI7417A/upload/720a8500f3175f8a2a3a792ffee66205.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://www.merklemap.com/">Merklemap</a> is a certificate transparency monitor that not only continuously fetches the latest data from certificate transparency logs in real-time, but also stores that data for search, archival, and statistical purposes. While certificate transparency (CT) logs are heavily sharded across different logs, we wanted to gather all this data into a single source of truth that can be indexed, queried and searched. In this post, we dive into the architecture and technical choices that enable Merklemap to operate at CT scale.</p>
<h2 id="heading-postgres-and-zfs">Postgres and ZFS</h2>
<p>While many distributed databases boast large-scale capabilities, we chose PostgreSQL for its proven performance, our operational experience, and because alternatives offered no clear benefit for our needs. With today's hardware capabilities, we were confident that we could fit all certificate transparency data in a single shard for the foreseeable future, and possibly indefinitely. The database we currently operate contains approximately <strong>20TB of storage</strong> and over <strong>100 billion unique rows</strong>.</p>
<p>We run two replicas for the core database, each with an AMD EPYC 9454P, 6 NVMe drives, and 1TB of memory.</p>
<p>We chose ZFS as our storage solution for PostgreSQL to leverage its compression, snapshots, performance characteristics, and send/receive capabilities. ZFS's default recordsize of 128K serves as an excellent baseline for maintaining optimal performance in terms of compression ratio, sequential scans, and system overhead.</p>
<p>However, PostgreSQL's default page size of 8K presented a challenge: we would experience significant read/write amplification. For each page operation, we'd need to read or write an entire 128K record, resulting in a 16x overhead for many operations.</p>
<p>Traditionally, database administrators are advised to reduce the ZFS recordsize to match or approximate PostgreSQL's page size. This approach, however, would result in very small ZFS records (8K or 16K), substantially increasing global overhead and nearly eliminating the compression benefits, one of the primary advantages that make ZFS attractive in the first place.</p>
<p>Instead, we took the opposite approach: increasing PostgreSQL's block size to get closer to ZFS's default recordsize. PostgreSQL's maximum block size is 32K, so we use that.</p>
<p>This setup is not difficult to achieve and can be done using the "configure" script:</p>
<p><code>./configure --with-blocksize=32 [other-options]</code></p>
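<p>The arithmetic behind this choice is easy to sketch. The helper below is purely illustrative (it is not part of Merklemap's codebase); it just computes the worst-case amplification factor when every page touch must read or rewrite a whole ZFS record:</p>

```python
# Hypothetical helper (not from Merklemap's codebase): worst-case
# read/write amplification when the database page size is smaller
# than the ZFS recordsize, so each page touch moves a whole record.
def amplification(recordsize_kb, page_kb):
    return recordsize_kb / page_kb

assert amplification(128, 8) == 16.0   # stock 8K PostgreSQL pages
assert amplification(128, 32) == 4.0   # rebuilt with --with-blocksize=32
```

<p>Moving from 8K to 32K pages thus cuts the worst-case amplification against a 128K recordsize from 16x to 4x while preserving ZFS's compression benefits.</p>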
<h2 id="heading-optimizing-for-append-only-operations">Optimizing for Append-Only Operations</h2>
<p>PostgreSQL's VACUUM process handles multiple critical "cleanup" operations, including recycling dead tuples, updating statistics, and preventing transaction ID wraparound.</p>
<p>We designed our schema to be as "append-only" as possible. This design choice significantly reduces the workload on VACUUM operations since fewer updates and deletes mean fewer dead tuples to clean up. We configured autovacuum to be very aggressive with the following settings:</p>
<pre><code>autovacuum_max_workers = &lt;number of Merklemap tables&gt;
autovacuum_naptime = 10s
autovacuum_vacuum_cost_delay = 1ms
autovacuum_vacuum_cost_limit = 2000
</code></pre>
<h2 id="heading-handling-transaction-id-wraparound">Handling Transaction ID Wraparound</h2>
<p>Despite our aggressive autovacuum configuration, sometimes the system would still become I/O constrained due to the high rate of data ingestion. This constraint particularly affects transaction ID wraparound issues.</p>
<p>According to the PostgreSQL <a target="_blank" href="https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND">VACUUM-FOR-WRAPAROUND documentation</a>: “<em>…since transaction IDs have limited size (32 bits) a cluster that runs for a long time (more than 4 billion transactions) would suffer transaction ID wraparound: the XID counter wraps around to zero, and all of a sudden transactions that were in the past appear to be in the future — which means their output become invisible. In short, catastrophic data loss. (Actually the data is still there, but that's cold comfort if you cannot get at it.) To avoid this, it is necessary to vacuum every table in every database at least once every two billion transactions.</em>”</p>
<p>We continuously monitor database age metrics. When predetermined thresholds are crossed, we apply a dynamic throttling mechanism: reducing the ingestion rate and prioritizing newer logs. Once autovacuum catches up and the system is stable, ingestion ramps back up. Our append-only schema design significantly helps this process by minimizing the workload for autovacuum operations.</p>
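<p>As a rough sketch of this kind of throttling (illustrative only — the thresholds and the <code>ingest_rate_factor</code> helper are assumptions, not Merklemap's actual code; the XID age itself comes from a standard catalog query such as <code>SELECT datname, age(datfrozenxid) FROM pg_database</code>):</p>

```python
# Illustrative sketch, not Merklemap's real logic. SOFT_LIMIT and
# HARD_LIMIT are assumed thresholds well below PostgreSQL's ~2 billion
# transaction wraparound horizon.
SOFT_LIMIT = 500_000_000    # assumed: start slowing ingestion here
HARD_LIMIT = 1_000_000_000  # assumed: pause bulk ingestion entirely

def ingest_rate_factor(xid_age):
    """Multiplier in [0, 1] applied to the normal ingestion rate."""
    span = HARD_LIMIT - SOFT_LIMIT
    # Full speed below SOFT_LIMIT, zero above HARD_LIMIT, linear between.
    return max(0.0, min(1.0, (HARD_LIMIT - xid_age) / span))

assert ingest_rate_factor(100_000_000) == 1.0          # young cluster: full speed
assert ingest_rate_factor(1_200_000_000) == 0.0        # let autovacuum catch up
assert round(ingest_rate_factor(750_000_000), 2) == 0.5  # halfway: half rate
```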
<h2 id="heading-relation-less">Relation-less?</h2>
<p>A frequent recommendation for scaling write operations in PostgreSQL is to use the COPY command, minimize relations and constraints, and essentially "just pipe the data in." While this advice is common, it doesn't represent the optimal approach for every use case.</p>
<h2 id="heading-prioritizing-data-integrity">Prioritizing Data Integrity</h2>
<p>When developing Merklemap, one of our primary motivations for choosing PostgreSQL was to leverage its robust constraint enforcement and ACID properties. Consequently, we designed our schema as a well-normalized relational model rather than optimizing solely for insertion efficiency.</p>
<p>This design decision does necessitate significantly more read operations and database commits, but we consider this performance trade-off worthwhile. The benefits of data integrity, consistency, and the ability to enforce complex relationships outweigh the potential write performance gains of a denormalized approach.</p>
<h2 id="heading-optimizing-commit-performance">Optimizing Commit Performance</h2>
<p>To mitigate the impact of our high commit frequency, we utilize PostgreSQL's commit grouping capabilities through the combination of commit_delay and commit_siblings parameters. This configuration allows the database to batch multiple commits together, reducing overhead.</p>
<p>At Merklemap, we maintain commit_delay at 100ms and keep commit_siblings at its default value of 5. This setup effectively balances our need for timely transaction completion with the efficiency benefits of grouped commits.</p>
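<p>In postgresql.conf terms, the two settings above look something like the fragment below. Note that commit_delay is specified in microseconds, so a 100ms delay is written as 100000:</p>

```
# Illustrative postgresql.conf fragment (values from this article)
commit_delay = 100000    # microseconds (100ms) to wait before flushing WAL
commit_siblings = 5      # only delay if at least 5 other transactions are active
```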
<h3 id="heading-avoiding-partitioning"><strong>Avoiding Partitioning</strong></h3>
<p>While PostgreSQL supports table partitioning, we found its benefits too narrow to justify the complexity. Partitioning can ease deletions, but brings challenges in constraint handling, indexing, foreign key management, and query planning. For us, well-indexed regular tables with thoughtful query discipline proved more efficient and simpler to maintain.</p>
<h3 id="heading-query-discipline-at-scale"><strong>Query Discipline at Scale</strong></h3>
<p>Our general queries execute extremely fast, typically returning results in just a few microseconds. This efficiency stems from our strict adherence to querying only indexed data. We've carefully designed our indexing strategy to support our most common access patterns while balancing the overhead of index size and insert performance impact.</p>
<p>With over 100 billion rows, casual querying is a non-starter. We strictly query indexed data, use partial, covering and expression indexes, and maintain materialized views for common aggregations. We also enforce query governor policies to prevent expensive scans during peak usage periods.</p>
<h2 id="heading-compression-algorithm">Compression algorithm</h2>
<p>We use lz4 for ZFS compression, as it proved to be computationally negligible for our workload while providing a compression ratio of approximately 2x. We evaluated zstd, but lz4 offered the optimal balance between performance impact and storage savings.</p>
<h2 id="heading-resilient-ingestion-strategy">Resilient Ingestion Strategy</h2>
<p>We follow a "fail-fast, retry-often" approach. While certificate transparency logs are generally reliable, like any remotely accessible service, they can experience downtime or performance issues. Fortunately, since certificates and pre-certificates are replicated across multiple logs, monitoring interruptions don't pose a significant problem.</p>
<p>Our operation resembles that of a traditional search engine, dynamically throttling our queries based on "perceived" load. When we detect that a log is slower than usual or returning errors, we adjust accordingly.</p>
<p>We implement intelligent retry mechanisms that activate when logs return to a healthy state, ensuring we fill any gaps in our data where we previously encountered errors. This approach handles scenarios where specific blocks in the logs consistently fail for a period while other blocks remain accessible.</p>
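<p>A minimal sketch of this gap-and-retry idea (an assumed design, not Merklemap's actual implementation): record the entry ranges that failed, and drain them for retry only once the log looks healthy again.</p>

```python
# Illustrative sketch (not Merklemap's code): remember failed entry
# ranges and hand them back for retry once the log is healthy.
class GapTracker:
    def __init__(self):
        self.gaps = []  # list of (start, end) half-open entry ranges

    def record_failure(self, start, end):
        """Note a range of log entries that could not be fetched."""
        self.gaps.append((start, end))

    def retryable(self, log_healthy):
        """Return all pending gaps for retry, but only when healthy."""
        if not log_healthy:
            return []
        todo, self.gaps = self.gaps, []
        return todo

t = GapTracker()
t.record_failure(1000, 1256)
assert t.retryable(log_healthy=False) == []          # still unhealthy: wait
assert t.retryable(log_healthy=True) == [(1000, 1256)]  # healthy: refetch
assert t.gaps == []                                   # gap is drained
```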
<h2 id="heading-efficient-ingestion">Efficient Ingestion</h2>
<p>We implement our ingest workers using Rust’s Tokio library. We carefully pool a minimal number of PostgreSQL connections to maintain optimal performance, following best practices outlined in <a target="_blank" href="https://www.enterprisedb.com/postgres-tutorials/why-you-should-use-connection-pooling-when-setting-maxconnections-postgres">EDB's connection pooling recommendations</a>.</p>
<p>With this optimized approach, we only require 5-6 CPU cores to ingest 5-8 million entries per hour.</p>
<h2 id="heading-historical-data-volume-explanation">Historical Data Volume Explanation?</h2>
<p>You might wonder why our database has accumulated such a large volume of data. While the current certificate transparency traffic averages around 1 million certificates per hour, we've been ingesting data available since the inception of certificate transparency.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>At Merklemap, we believe that traditional relational databases like PostgreSQL can still be the right choice for specialized large-scale applications when properly configured and optimized. By housing over 100 billion rows of certificate transparency data in a single PostgreSQL shard, we think we’ve shown that thoughtful architecture decisions can outweigh the immediate appeal of distributed systems in specific use cases.</p>
<p>While our current infrastructure comfortably handles the present certificate transparency ecosystem, we continue to monitor growth trends and optimize our processes. The techniques outlined in this article aren't just applicable to certificate transparency, they represent valuable approaches for any PostgreSQL deployment dealing with high-volume workloads.</p>
]]></content:encoded></item><item><title><![CDATA[Announcing Transparency.dev Summit 2025]]></title><description><![CDATA[We’re excited to announce the 2nd annual Transparency.dev Summit - a 2.5 day gathering of implementers, operators, and clients of real-world transparency systems.
🗓 Save the Date: October 20–22, 2025
📍 Gothenburg, Sweden
🏠 Hosted by Glasklar Tekni...]]></description><link>https://blog.transparency.dev/announcing-transparencydev-summit-2025</link><guid isPermaLink="true">https://blog.transparency.dev/announcing-transparencydev-summit-2025</guid><category><![CDATA[transparency.dev]]></category><category><![CDATA[certificate transparency]]></category><category><![CDATA[mullvad]]></category><dc:creator><![CDATA[Tracy Miranda]]></dc:creator><pubDate>Thu, 10 Apr 2025 16:36:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1744227681440/d4a5b5fe-efb2-43c5-b40f-39403e4fc7f9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We’re excited to announce the 2nd annual Transparency.dev Summit - a 2.5 day gathering of implementers, operators, and clients of real-world transparency systems.</p>
<p><strong>🗓</strong> Save the Date: <strong>October 20–22, 2025</strong></p>
<p><strong>📍 Gothenburg, Sweden</strong></p>
<p><strong>🏠 Hosted by</strong> <a target="_blank" href="https://www.glasklarteknik.se"><strong>Glasklar Teknik</strong></a></p>
<p>The <a target="_blank" href="https://transparency.dev/summit2024/">inaugural</a> <a target="_blank" href="http://Transparency.dev">Transparency.dev</a> <a target="_blank" href="https://transparency.dev/summit2024/">Summit</a> held in October 2024 was a huge success - bringing together over 50 attendees to drive the future direction of transparency ecosystems. Hosted by Google’s TrustFabric team in London UK, the <a target="_blank" href="https://blog.transparency.dev/transparencydev-summit-recap">summit sparked insightful discussions and strong community momentum</a>. Now, the hosting baton passes to another key community member: Glasklar Teknik, who will welcome us to Gothenburg, Sweden for the next chapter.</p>
<p>This year, we’re diving deeper into the infrastructure, operations, and future of transparency. Whether you’re running a transparency log, building new transparency ecosystems, or simply interested in learning more about how transparency builds trust for any digital domain, this summit is for you.</p>
<p>This year’s program committee will be led by Rasmus Dahlberg and Martin Hutchinson, helping shape the content and flow of the summit.</p>
<h2 id="heading-certificate-transparency-ct-day">Certificate Transparency (CT) Day</h2>
<p>Certificate Transparency is a cornerstone of the conference. The CT community is undergoing <a target="_blank" href="https://blog.transparency.dev/what-2025-holds-for-certificate-transparency-and-the-transparencydev-ecosystem">a major evolution towards tiled logs</a> to enhance the robustness of the ecosystem. Transparency.dev summit will feature a <strong>dedicated CT day</strong> with emphasis on:</p>
<ul>
<li><p>Adopting static CT logs for transformative benefits to the ecosystem</p>
</li>
<li><p>Learning about real world challenges faced by log operators and browsers</p>
</li>
<li><p>Tools and monitor solutions that help digest CT logs</p>
</li>
<li><p>Post-Quantum Readiness - how to evolve for a PQ world</p>
</li>
</ul>
<h2 id="heading-central-pillars-of-the-summit">Central Pillars of the Summit</h2>
<p>In addition to CT, the summit will be structured around the three following pillars:</p>
<ul>
<li><p><strong>Applications of Transparency</strong> - We're excited to have talks on existing and emerging use-cases:</p>
<ul>
<li><p><strong>Binary Transparency</strong> - This is a broad umbrella where we focus on 2 aspects:</p>
<ul>
<li><p>Signature transparency logs such as Sigsum and Sigstore.</p>
</li>
<li><p>Package index logs such as GoSumDB.</p>
</li>
</ul>
</li>
<li><p><strong>Key Transparency</strong> - E.g., discussion of production deployments and open challenges.</p>
</li>
<li><p><strong>Other uses</strong> - Please do not feel limited if your application's umbrella is not listed here.</p>
</li>
</ul>
</li>
<li><p><strong>Core Transparency Technology</strong> - Verifiable data structures are at the core of all transparency applications. Implementations of verifiable logs, maps, and adjacent technologies such as witnesses, verifiers, and monitors are for example in scope.</p>
</li>
<li><p><strong>Transparency Future Roadmap</strong> - What’s next for transparency? We’ll explore the roadmap ahead for various transparency technologies and standards.</p>
</li>
</ul>
<h2 id="heading-who-should-attend">Who Should Attend?</h2>
<p>This summit is for:</p>
<ul>
<li><p>People running or building transparency infrastructure</p>
</li>
<li><p>Open source maintainers and security engineers</p>
</li>
<li><p>Academic researchers</p>
</li>
<li><p>Policy folks and ecosystem designers</p>
</li>
<li><p>Anyone passionate about verifiability, auditability, and trust</p>
</li>
</ul>
<h2 id="heading-get-involved">Get Involved</h2>
<p>We want to hear from you! If you’re interested in attending, speaking, or helping shape the agenda:</p>
<p>👉 <a target="_blank" href="https://docs.google.com/forms/d/e/1FAIpQLSfabrC4JnWQCKGjPuHg4-kAua6Fe9wc0259IEn0pLTW1vAeOQ/viewform">Fill out the Interest Survey</a></p>
<p>Transparency.dev Summit 2025 is sponsored by Google and Mullvad VPN. We’re actively welcoming additional sponsors who share our mission to grow and support the Transparency.dev community. If your organization is interested in getting involved, please fill out our <a target="_blank" href="https://docs.google.com/forms/d/e/1FAIpQLSfabrC4JnWQCKGjPuHg4-kAua6Fe9wc0259IEn0pLTW1vAeOQ/viewform">interest survey</a> to request more information.</p>
<p>Stay tuned to our website: <a target="_blank" href="https://transparency.dev/summit/">Transparency.dev Summit</a> for updates on the call-for-papers (CFP) and registration. Also join us in the <a target="_blank" href="https://transparency.dev/slack/">transparency-dev Slack</a> to hear the latest on the summit.</p>
]]></content:encoded></item><item><title><![CDATA[Certificate Transparency in Firefox: A Big Step for Web Security]]></title><description><![CDATA[Certificate Transparency (CT) has been one of the biggest advancements in web security, keeping users safe from threats such as certificate fraud and man-in-the-middle attacks. While CT has been around for over 11 years, enforcement has varied across...]]></description><link>https://blog.transparency.dev/ct-in-firefox</link><guid isPermaLink="true">https://blog.transparency.dev/ct-in-firefox</guid><category><![CDATA[certificate transparency]]></category><category><![CDATA[Firefox]]></category><category><![CDATA[transparency.dev]]></category><category><![CDATA[Web Security]]></category><dc:creator><![CDATA[Tracy Miranda]]></dc:creator><pubDate>Mon, 24 Feb 2025 14:54:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4xmVvHRioKg/upload/1845adcf3d0048cf789a75c0337d6b56.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Certificate Transparency (CT) has been one of the biggest advancements in web security, keeping users safe from threats such as certificate fraud and man-in-the-middle attacks. While CT has been around for over 11 years, enforcement has <a target="_blank" href="https://certificate.transparency.dev/useragents/">varied across browsers</a>.</p>
<p><a target="_blank" href="https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/OagRKpVirsA/m/Q4c89XG-EAAJ"><strong>Firefox is now enforcing Certificate Transparency</strong> on desktop platforms</a>, taking a significant step towards a safer web. With this change, effective in version 135, Firefox will reject certificates that do not comply with CT requirements. This ensures that all certificates trusted by the browser meet high transparency standards.</p>
<h2 id="heading-what-does-this-mean-for-website-owners">What does this mean for Website Owners?</h2>
<p>It ensures that any TLS certificate trusted by Firefox is logged and publicly discoverable in <a target="_blank" href="https://certificate.transparency.dev/logs/">a Certificate Transparency log.</a> If your website already follows best practices and uses CT-compliant certificates, you don’t need to take any additional action. However, if you’re unsure, here are some steps you can take:</p>
<ol>
<li><p><strong>Ensure your Certificate Authority (CA) supports CT logging -</strong> Most major CAs already comply, but if you are using an uncommon CA, verify their status.</p>
</li>
<li><p><strong>Monitor your certificates -</strong> Use Certificate Transparency <a target="_blank" href="https://certificate.transparency.dev/monitors/">monitoring services and tools</a>  to ensure no unauthorized certificates which would be trusted by Firefox and <a target="_blank" href="https://certificate.transparency.dev/useragents/">other CT enforcing user agents</a> are issued for your domain.</p>
</li>
</ol>
<h2 id="heading-firefox-ct-in-action">Firefox CT in Action</h2>
<p>Certificate transparency information can be delivered either as:</p>
<ul>
<li><p>signed certificate timestamps (SCTs) embedded in the certificate itself, or</p>
</li>
<li><p>SCTs stapled alongside the certificate (via the TLS handshake or in an OCSP response).</p>
</li>
</ul>
<p>For a connection to succeed, sufficient certificate transparency information must be provided using either of these methods. See <a target="_blank" href="https://wiki.mozilla.org/SecurityEngineering/Certificate_Transparency#CT_Policy">Firefox CT Policy</a> for more details.</p>
<p>You can see CT enforcement in action for yourself using the <a target="_blank" href="https://no-sct.badssl.com">no-sct.badssl.com</a> test site. When accessed from Firefox v135, it shows the error "MOZILLA_PKIX_ERROR_INSUFFICIENT_CERTIFICATE_TRANSPARENCY", reflecting the fact that the server does not send a Signed Certificate Timestamp (SCT) for the site's domain.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740404495849/0e62b10f-fb01-492e-b261-03f64db998df.jpeg" alt="Firefox v135 showing the error &quot;MOZILLA_PKIX_ERROR_INSUFFICIENT_CERTIFICATE_TRANSPARENCY&quot; to reflect the fact that the server does not send a Signed Certificate Timestamp (SCT) for the domain of the test site." class="image--center mx-auto" /></p>
<p>Image by <a target="_blank" href="https://bsky.app/profile/mattm.bsky.social/post/3lhexejiubc2t">Matthew McPherrin on Bluesky</a></p>
<h2 id="heading-firefox-known-ct-logs">Firefox Known CT Logs</h2>
<p>Supporting CT in a browser requires verifying SCTs from an approved set of CT logs. This presents a non-trivial operational issue, especially as CT logs come and go over time. Each browser enforcing CT sets up their own user agent policy for log configuration. For CT in Firefox:</p>
<ul>
<li><p>The list of known trusted logs is derived from <a target="_blank" href="https://www.gstatic.com/ct/log_list/v3/log_list.json">Chrome’s list</a> and is automatically updated each week in prerelease versions of Firefox. This means CT log operators do not need to submit new logs to Firefox.</p>
</li>
<li><p>To see what logs are in a particular version of Firefox, you can examine the history of <a target="_blank" href="https://searchfox.org/mozilla-central/source/security/ct/CTKnownLogs.h">Known CT Logs</a>.</p>
</li>
</ul>
<p>To learn more about Firefox CT policies, see <a target="_blank" href="https://wiki.mozilla.org/SecurityEngineering/Certificate_Transparency">Certificate Transparency in Firefox</a></p>
<h2 id="heading-firefox-amp-tile-based-logs">Firefox &amp; Tile-Based Logs</h2>
<p>As the Certificate Transparency community <a target="_blank" href="https://blog.transparency.dev/what-2025-holds-for-certificate-transparency-and-the-transparencydev-ecosystem">moves toward tile-based logs</a> - supporting <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md">static-ct-api</a> logs in addition to RFC6962 logs - questions arise about whether Firefox will follow suit. On the Mozilla dev-security-policy mailing list, Dana Keeler provided a positive indication that Mozilla is open to this approach, especially as industry adoption increases. <a target="_blank" href="https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/lypRGp4JGGE/m/mareQmFjAAAJ">Keeler stated</a>: “<em>If it becomes clear that supporting static-ct-api logs is necessary to interoperate, we will probably allow them as well.</em>”</p>
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>The enforcement of Certificate Transparency in Firefox marks a major step forward in web security. With all major browsers now requiring certificates to be logged in CT logs, it makes it harder for attackers to abuse certificates without detection, and means a safer browsing experience for users. For website owners, it’s a reminder to stay vigilant and monitor CT logs for certificates covering their domains.</p>
<p>More broadly, Certificate Transparency continues to evolve and gain adoption across the industry. As browsers also start to look towards accepting tile-based logs, the CT ecosystem is becoming more robust, ensuring greater transparency and security for the web.</p>
]]></content:encoded></item><item><title><![CDATA[I Built a New Certificate Transparency Log in 2024 - Here’s What I Learned]]></title><description><![CDATA[Certificate transparency (CT) is such a useful research tool and I’d been wanting to learn more about it for a while. After Sunlight was announced last year, I decided the best way to learn was to write a CT log, and set off on quite the adventure.
T...]]></description><link>https://blog.transparency.dev/i-built-a-new-certificate-transparency-log-in-2024-heres-what-i-learned</link><guid isPermaLink="true">https://blog.transparency.dev/i-built-a-new-certificate-transparency-log-in-2024-heres-what-i-learned</guid><category><![CDATA[certificate transparency]]></category><category><![CDATA[trillian]]></category><category><![CDATA[tessera]]></category><category><![CDATA[sunlight]]></category><dc:creator><![CDATA[Adit Sachde]]></dc:creator><pubDate>Thu, 23 Jan 2025 15:54:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/lgbbUZ7VHJI/upload/8a97fd16c87082eda1bf29ef0b052ff3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Certificate transparency (CT) is such a useful research tool and I’d been wanting to learn more about it for a while. After <a target="_blank" href="https://sunlight.dev">Sunlight</a> was announced last year, I decided the best way to learn was to write a CT log, and set off on quite the adventure.</p>
<p>The CT ecosystem is evolving. The original specification, <a target="_blank" href="https://datatracker.ietf.org/doc/html/rfc6962">RFC6962</a>, was published in June 2013. Subsequent projects such as Go’s SumDB extended the application of transparency logs to other places. The <a target="_blank" href="https://c2sp.org/static-ct-api">Static CT API</a> takes the lessons learned from these projects and applies them back to CT, moving some expensive operations from server-side to client-side. This change drastically simplifies how CT logs function and makes them much cheaper to operate.</p>
<p>To better appreciate the impact of these changes, my CT log implementation, <a target="_blank" href="https://github.com/aditsachde/itko">Itko</a>, supports both RFC6962 and the Static CT API. Itko also has an <a target="_blank" href="https://github.com/aditsachde/itko?tab=readme-ov-file#public-instance">operational instance</a> which is available for querying and testing against.</p>
<h1 id="heading-lessons-learned"><strong>Lessons learned</strong></h1>
<p>The Static CT API is a variant of the more generic <a target="_blank" href="https://c2sp.org/tlog-tiles">tiled transparency log specification</a>, a specification you can use to bring transparency to your own ecosystem – here are some of my takeaways which may be useful for you.</p>
<h2 id="heading-community"><em>Community</em></h2>
<p>First, the most important takeaway for me from this project is how welcoming and helpful the transparency community is. I would not have been able to build Itko without them. The <a target="_blank" href="https://join.slack.com/t/transparency-dev/shared_invite/zt-27pkqo21d-okUFhur7YZ0rFoJVIOPznQ">Transparency.dev Slack</a> and the <a target="_blank" href="https://groups.google.com/a/chromium.org/g/ct-policy">CT policy mailing list</a> are great places to learn, ask questions, and follow along on development.</p>
<h2 id="heading-tooling"><em>Tooling</em></h2>
<p>In 2024, the ecosystem was in a bit of a scattered state. The libraries and tools exist, but they are spread across a number of packages and projects. Itko uses components from Go’s SumDB, Trillian, and certificate-transparency-go.</p>
<p>Changing this is a focus for the ecosystem going into 2025, and large strides have already been made. <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera">Trillian Tessera</a> is the focal point of this effort to make tiled transparency logs usable by all. Currently out in <a target="_blank" href="https://blog.transparency.dev/announcing-the-alpha-release-of-trillian-tessera">alpha</a>, Trillian Tessera bundles all the common functionality needed by tiled transparency logs, allowing implementers to focus on the bits unique to their ecosystem.</p>
<h2 id="heading-operational-simplicity"><em>Operational Simplicity</em></h2>
<p>Tiled logs are operationally awesome. The only component in the read path is a filesystem; no database needed! And static files mean most requests hit the page cache, not disk. Itko ingests two million certificates and serves millions of requests per day on a quarter of a CPU core, because there is no database involved.</p>
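<p>Part of why a plain filesystem suffices: in tiled log designs, the URL of each tile is a pure function of its index, so serving the read path is just serving static files. Below is a hedged sketch of the index-to-path encoding in the style of Go's sumdb tile paths (consult the tlog-tiles specification for the authoritative format): the index is split into groups of three decimal digits, all but the last prefixed with "x".</p>

```python
# Sketch of tile index path encoding in the style of Go's sumdb/tlog
# (see the C2SP tlog-tiles spec for the authoritative format).
# The index is written in base-10, split into 3-digit groups from the
# most significant end; every group except the last is prefixed "x".
def tile_index_path(n):
    parts = ["{:03d}".format(n % 1000)]
    n //= 1000
    while n:
        parts.insert(0, "x{:03d}".format(n % 1000))
        n //= 1000
    return "/".join(parts)

assert tile_index_path(5) == "005"
assert tile_index_path(1234067) == "x001/x234/067"
```

<p>Because the mapping is deterministic, any dumb static file host can serve tiles, and clients can compute the exact URLs they need without any server-side query logic.</p>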
<p>However, write path uptime is at odds with this simplicity. A transparency log needs all its entries to have a unique and ordered sequence number. This introduces a global counter that must be coordinated between nodes in a distributed system. There are a variety of approaches to tackle this problem, but with Itko, I instead chose to not tackle it.</p>
<p>The CT ecosystem requires only 99% uptime, because multiple redundant logs provide resilience. Itko takes advantage of this by handling all write operations in a single process, with additional processes in standby mode ready to take over if a short period of unavailability is detected. Instead of targeting the highest uptime possible, careful consideration of ecosystem requirements can allow logs to shed a lot of complexity and potential failure modes.</p>
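<p>To make the single-writer idea concrete, here is a minimal, hypothetical sketch (not Itko’s actual code) of the global counter a lone write process maintains. Once only one process assigns indices, no distributed coordination is needed at all:</p>

```go
package main

import (
	"fmt"
	"sync"
)

// Sequencer hands out unique, strictly ordered sequence numbers from a
// single process, sidestepping distributed coordination entirely.
type Sequencer struct {
	mu   sync.Mutex
	next uint64
}

// Assign reserves n contiguous indices for a batch of entries and
// returns the index of the first one.
func (s *Sequencer) Assign(n uint64) uint64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	first := s.next
	s.next += n
	return first
}

func main() {
	var seq Sequencer
	fmt.Println(seq.Assign(3)) // 0: entries get indices 0, 1, 2
	fmt.Println(seq.Assign(2)) // 3: entries get indices 3, 4
}
```

<p>The trade-off is exactly the one described above: if this process dies, writes pause until a standby takes over.</p>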
<h2 id="heading-unexpected-visitors"><em>Unexpected visitors</em></h2>
<p>Transparency logs are a great investigative research tool, which means quite a few folks monitor them. The Itko 2025 shard was meant to be a demo and is not considered a trusted log by Chrome or Safari. But surprisingly, the log was soon serving more than 300 GB of traffic a day as multiple people monitored it!</p>
<p>The number of requests your log receives will affect the most economical choice of storage. Tiled logs fit well with object storage, with the benefit of never having to worry about downtime. However, object storage charges for every read and write request, which can be cost-prohibitive for public-good services like transparency logs. For Itko, I found a shared persistent disk to be the more economical choice, one that also delivered better performance thanks to lower write latencies.</p>
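<p>A back-of-the-envelope calculation makes the point. The per-request price below is purely illustrative (it is not any provider’s actual rate), but the shape of the math holds: on object storage the bill scales with request count, not data size:</p>

```go
package main

import "fmt"

// monthlyRequestCostCents estimates the monthly per-request bill for an
// object store, given a daily request count and a hypothetical price in
// cents per million requests. Illustrative numbers only.
func monthlyRequestCostCents(requestsPerDay, centsPerMillion int64) int64 {
	return requestsPerDay * 30 * centsPerMillion / 1_000_000
}

func main() {
	// e.g. 5M reads/day at a made-up 40 cents per million GET requests:
	fmt.Println(monthlyRequestCostCents(5_000_000, 40)) // 6000 cents, i.e. ~$60
}
```

<p>Scale the request count up to heavy-monitoring levels and that term grows linearly, whereas a persistent disk is billed by capacity, so extra reads cost nothing.</p>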
<h2 id="heading-testing"><em>Testing</em></h2>
<p>Having good test coverage is always useful, but having a comprehensive test suite is especially important when the servers and clients are written by different people.</p>
<p>When writing Itko, my first step was to write integration tests, adapting the test cases and the <a target="_blank" href="https://github.com/google/certificate-transparency-go/blob/master/trillian/integration/ct_hammer/main.go">CT hammer tool</a> from certificate-transparency-go. Then, I wrote the actual implementation, which allowed for a high degree of confidence in spec compliance. And it totally paid off. After the log passed the test cases, it worked with other OSS tools on the first try.</p>
<h2 id="heading-just-do-it"><em>Just do it!</em></h2>
<p>I had zero understanding of how transparency logs worked when I first started. Over the course of about 40 extremely scattered hours of work, I was able to get Itko to pass its spec compliance test cases. The trust guarantees that transparency logs provide make them seem a bit magical. But today, they are not very complicated to implement. Join the Slack, take a look at Trillian Tessera, and just dive in!</p>
<h3 id="heading-resources"><strong>Resources:</strong></h3>
<ul>
<li><p><a target="_blank" href="https://github.com/aditsachde/itko">Itko Repository and Development Instance</a></p>
</li>
<li><p><a target="_blank" href="https://transparency-dev.slack.com/join/shared_invite/zt-2hw3qg8w6-FTaCast0CflW~EuaDTpLwA#/shared-invite/email">Join the Slack!</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/transparency-dev">Transparency.dev Roadmap</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[What 2025 Holds for Certificate Transparency and the Transparency.dev Ecosystem]]></title><description><![CDATA[Certificate Transparency (CT) has been a cornerstone of internet security for over 11 years. The CT ecosystem has caught tens of thousands of certificate misissuances, numerous backdated certificates, and countless baseline requirement violations. By...]]></description><link>https://blog.transparency.dev/what-2025-holds-for-certificate-transparency-and-the-transparencydev-ecosystem</link><guid isPermaLink="true">https://blog.transparency.dev/what-2025-holds-for-certificate-transparency-and-the-transparencydev-ecosystem</guid><category><![CDATA[sigsum]]></category><category><![CDATA[certificate transparency]]></category><category><![CDATA[transparency.dev]]></category><category><![CDATA[#sigstore]]></category><dc:creator><![CDATA[Tracy Miranda]]></dc:creator><pubDate>Wed, 15 Jan 2025 14:45:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/v8aLw1zBGdM/upload/816e0473f4acee207427c232737847f0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Certificate Transparency (CT) has been a cornerstone of internet security for over 11 years. The CT ecosystem has caught tens of thousands of certificate misissuances, numerous backdated certificates, and countless baseline requirement violations. By enabling timely corrective action where necessary, CT has played a critical role in safeguarding the integrity and reliability of the TLS ecosystem. It’s no wonder, then, that in 2024 <a target="_blank" href="https://blog.transparency.dev/certificate-transparency-wins-the-levchin-prize">CT won the prestigious Levchin Prize for Real-World Cryptography</a>, underlining the profound impact CT is making on securing the digital world.</p>
<p>As we step into 2025, the rapid evolution of the digital landscape means security concerns continue to be of paramount importance. The <a target="_blank" href="https://transparency.dev/">Transparency.dev</a> ecosystem has emerged as a pivotal community for fostering innovation in CT and beyond. This post takes a look at some key developments you can expect to see take place in 2025.</p>
<h1 id="heading-tile-based-logs-are-the-future">Tile-based Logs are the Future</h1>
<p>After years of working with transparency logs at scale, the transparency.dev community has found there are better ways to make logs more scalable and cheaper to operate. A new approach ‘<a target="_blank" href="https://transparency.dev/articles/tile-based-logs/">Tile-based Transparency Logs</a>’, built on the concept of exposing storage as “tiles”, has already been tested and proven in several ecosystems since 2021. 2024 was a transformative year for tile-based logs, beginning with the <a target="_blank" href="https://letsencrypt.org/2024/03/14/introducing-sunlight/">release of the Sunlight log by Let’s Encrypt</a>. This was followed by a community developed standardized API for CT tile-based logs known as the <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md">Static-CT API</a>, which rapidly became the foundation for new and experimental log implementations (e.g. <a target="_blank" href="https://groups.google.com/a/chromium.org/g/ct-policy/c/-P0rGATLQFA/m/w6hMxgn5AAAJ">itko</a>). The year concluded with the <a target="_blank" href="https://blog.transparency.dev/announcing-the-alpha-release-of-trillian-tessera">alpha release of Tessera</a>, a general-purpose tile-based log developed by the creators of Trillian, further cementing the viability of this innovative approach.</p>
<p>The Chrome CT team have already <a target="_blank" href="https://groups.google.com/a/chromium.org/g/ct-policy/c/W7OSO3SbrFo/m/S2XyhXx_AAAJ">shared plans</a> to start accepting tile-based Static-CT API logs into the ecosystem, signalling a significant evolution in Certificate Transparency. Introducing major changes to a mature ecosystem is a huge undertaking and can often be highly disruptive. For context, <a target="_blank" href="https://datatracker.ietf.org/doc/html/rfc9162">RFC 9162</a> was developed as an improved version of <a target="_blank" href="https://www.rfc-editor.org/rfc/rfc6962">RFC 6962</a> but was never widely adopted as the incremental benefits did not justify the cost and complexity of implementation. This highlights the importance of tile-based logs, as their adoption demonstrates the CT ecosystem’s willingness to evolve when the benefits are compelling and transformative.</p>
<h1 id="heading-cloud-native-ct-logs">Cloud Native CT Logs</h1>
<p>While next-generation transparency logs emphasize their tile-based architecture and API, it is equally important to highlight their cloud native approach to storage management. By natively supporting cloud object storage, these logs unlock significant advantages such as operational simplicity, high throughput and the ability to dynamically scale based on usage. Sunlight, for instance, supports S3-compatible object stores, which <a target="_blank" href="https://youtu.be/QZnz8pvsN_w?si=guO0ME8yyTI9IEkw">Let’s Encrypt leverages to reduce costs and scale up easily</a>. In the case of Trillian Tessera, each storage backend is <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/docs/design/philosophy.md#multi-implementation-storage">independently implemented</a> and takes the most native approach for the infrastructure it targets. The Tessera alpha includes support for object storage on both GCP and AWS, showcasing its commitment to leveraging cloud native principles. By embracing cloud object storage, next-generation transparency logs reduce the complexities associated with traditional storage solutions. This shift enables organizations to benefit from high availability, dynamic scaling, and simplified maintenance, paving the way for more efficient and resilient transparency logging systems.</p>
<h1 id="heading-the-growth-of-transparency-ecosystems">The Growth of Transparency Ecosystems</h1>
<p>The CT ecosystem has inspired several other transparency-based ecosystems, such as <a target="_blank" href="https://youtu.be/Mp23yQxYm2c?si=-FGc4YJ-6f-y8FWD">Sigsum for transparent key-usage</a> and <a target="_blank" href="https://youtu.be/shUwnSFtP8g?si=zlnmOvozXpdkFBbR">Key Transparency in Proton Mail</a>.</p>
<p>In software supply chain security, <a target="_blank" href="https://www.sigstore.dev/">Sigstore</a> uses transparency logs to make it easier for developers to sign software and for users to verify its authenticity. In 2024, Sigstore saw significant adoption into popular OSS packaging ecosystems such as <a target="_blank" href="https://youtu.be/ZVC9dwmqX2s?si=ygAh_eA1SVpT-zmp">Python</a> and <a target="_blank" href="https://blog.sigstore.dev/homebrew-build-provenance/">Homebrew</a>, demonstrating its practical application and broad impact. Python is moving towards <a target="_blank" href="https://peps.python.org/pep-0761/">using Sigstore exclusively</a> for signing release artifacts, while Homebrew leverages Sigstore to provide verifiable provenance for its binary packages. Sigstore has evolved into being <em>critical internet infrastructure</em> that is relied upon by GitHub, GitLab, and other major software organizations across the planet. Looking ahead to 2025, <a target="_blank" href="https://youtu.be/sv-1ue-nAfo?si=uWEnsmIkrqTQqTy4">Sigstore plans to transition to tile-based logs</a> to take advantage of their lower operational costs and improved scalability.</p>
<p>In general, as logs become easier to set up, maintain, and operate, expect more industries to take advantage of transparency technologies. This trend could extend beyond traditional applications to emerging areas such as <a target="_blank" href="https://youtu.be/QHOzEkw_9d4?si=npi4uw5cOwbXg69b">AI transparency (transparency in machine learning models)</a> and <a target="_blank" href="https://youtu.be/6cNQTMQ5Qgg?si=KecVAmRp5dUhZz58">enhancing trust in digital identity ecosystems</a>.</p>
<h1 id="heading-the-rise-of-standards-and-interoperability">The Rise of Standards and Interoperability</h1>
<p>With the rise of transparency adoption, there has been a parallel evolution of protocols and formats aimed at standardizing transparency systems. This development has led to the creation of more general-purpose and reusable building blocks. These evolving specifications, shaped through implementation, experimentation, and collaboration, have been consolidated in <a target="_blank" href="https://github.com/C2SP/C2SP">The Community Cryptography Specification Project (C2SP) repository</a>. Key specifications include:</p>
<ul>
<li><p><strong>Generic Checkpoint</strong> - a spec for a <a target="_blank" href="https://c2sp.org/signed-note">signed note</a> where the body is precisely formatted for use in transparency log applications</p>
</li>
<li><p><strong>Generic Tiles</strong> - a <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/tlog-tiles.md">spec</a> for an efficient HTTP API to fetch the signed checkpoint, Merkle Tree hashes, and entries of a transparency log containing arbitrary entries.</p>
</li>
<li><p><strong>Generic Witnessing</strong> - a spec for a synchronous HTTP-based protocol to obtain <a target="_blank" href="https://c2sp.org/tlog-cosignature">cosignatures</a> from transparency log witnesses.</p>
</li>
</ul>
<p>These specifications significantly improve interoperability not only within Certificate Transparency (CT) but also across different transparency ecosystems. This growing standardization enables the creation of shared horizontal services that can be utilized across various domains. A prime example of such a service is witnessing. Witnessing plays a crucial role in maintaining the append-only, globally consistent, and verifiable properties of transparency logs. The <a target="_blank" href="https://youtu.be/v9cgvZXRRZU">ArmoredWitness network</a> made a significant advancement in this area by launching the first sustainable, operational network capable of supporting multiple logs regardless of their payloads, and it will continue to grow and evolve in 2025.</p>
<h1 id="heading-the-road-ahead">The Road Ahead</h1>
<p>The future of Certificate Transparency and the Transparency.dev ecosystem is bright. As more organizations recognize the importance of transparency in securing digital communications, the demand for robust CT solutions will only grow. While there is much to look forward to in 2025, several challenges remain, not least the looming <a target="_blank" href="https://youtu.be/s0HQCx4wy44?si=PD5PSUDQt3SjRTOO">challenge of adopting post-quantum cryptography</a>.</p>
<p>The Transparency.dev community will continue to play a pivotal role in shaping this future. By fostering collaboration, sharing knowledge, and building innovative tools, the ecosystem will empower security professionals and developers to make the digital world a safer place for everyone. Join us as we build a more transparent, accountable, and secure digital world!</p>
<ul>
<li><p>Join the transparency-dev <a target="_blank" href="https://transparency.dev/slack/">Slack</a></p>
</li>
<li><p>Follow the <a target="_blank" href="https://blog.transparency.dev/"><strong>Transparency-dev Community Blog</strong></a></p>
</li>
<li><p>Learn more about the <a target="_blank" href="https://github.com/transparency-dev">Transparency.dev Roadmap</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[PostgreSQL Support for Certificate Transparency Logs Released]]></title><description><![CDATA[Certificate Transparency logs can now leverage PostgreSQL’s reliability and performance, giving log operators additional storage choice and flexibility. Trillian, the open source project that powers the certificate transparency (CT) ecosystem, now su...]]></description><link>https://blog.transparency.dev/postgresql-support-for-certificate-transparency-logs-released</link><guid isPermaLink="true">https://blog.transparency.dev/postgresql-support-for-certificate-transparency-logs-released</guid><category><![CDATA[certificate transparency]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Rob Stradling]]></dc:creator><pubDate>Fri, 20 Dec 2024 15:56:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/P4vkOwEav7I/upload/110318ad7d043dbb56339c5df34227d6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Certificate Transparency logs can now leverage PostgreSQL’s reliability and performance, giving log operators additional storage choice and flexibility. Trillian, the open source project that powers the certificate transparency (CT) ecosystem, now supports PostgreSQL as a storage and quota manager backend, thanks to a major contribution by Sectigo. Sectigo, as a public CA and an established part of the CT ecosystem, operates its own logs and runs the popular <a target="_blank" href="http://crt.sh">crt.sh</a> Certificate Search tool. Trillian originally supported Google’s Cloud Spanner and MySQL as storage backends, though it was always designed to be extensible to other storage implementations. And it is not just the CT ecosystem that benefits: any other Trillian-based transparency log can take advantage of the new storage backend, available in the recent <a target="_blank" href="https://github.com/google/trillian/releases/tag/v1.7.0">Trillian v1.7.0</a> release.</p>
<h3 id="heading-why-postgresql">Why PostgreSQL?</h3>
<p>PostgreSQL is a powerful, open source relational database known for its robustness, scalability, and extensibility. Sectigo originally used MariaDB as the transparency log storage backend. However, a <a target="_blank" href="https://groups.google.com/a/chromium.org/g/ct-policy/c/038B7F4g8cU/m/KsOJaEhnBgAJ">CT log failure earlier this year</a> due to MariaDB corruption after disk space exhaustion provided the motivation for a change. PostgreSQL, with its robust Write-ahead Logging (WAL) and strict adherence to ACID (Atomicity, Consistency, Isolation, Durability) principles, made it a better option for avoiding corruption and improving data integrity of the log.</p>
<p>While reliability was the catalyst for this transition, other factors also motivated the shift. The Sectigo team has deep expertise with PostgreSQL, including extensive experience operating <a target="_blank" href="http://crt.sh">crt.sh</a> for years. Their experience, which included architecting several iterations of crt.sh’s CT log ingestion application to accommodate the rapid growth of WebPKI issuance rates, proved invaluable when optimising write performance in this new Trillian feature.</p>
<h3 id="heading-trillian-postgresql">Trillian + PostgreSQL</h3>
<p>Open source thrives on collaboration and contributions and the <a target="_blank" href="http://transparency.dev">transparency.dev</a> community embodies this. Trillian <a target="_blank" href="https://github.com/google/trillian/blob/master/CONTRIBUTING.md">welcomes contributions</a> and using this process Sectigo <a target="_blank" href="https://github.com/google/trillian/pull/3644">opened a PR</a> and worked with Google’s TrustFabric team to incorporate the changes, including tests and documentation, into Trillian. The <a target="_blank" href="https://google.github.io/trillian/docs/Feature_Implementation_Matrix.html">Feature Implementation Matrix</a> highlights PostgreSQL as one of the supported options now, currently in Alpha stage. Sectigo is working on spinning up development logs and will spin up new production logs (code name Elephant) in due course and looks forward to keeping the community up to date.</p>
<h3 id="heading-getting-started-with-trillian-and-postgresql-for-ct">Getting Started with Trillian and PostgreSQL for CT</h3>
<p>To take advantage of using PostgreSQL for CT:</p>
<ol>
<li><p>Install the latest versions with PostgreSQL support; users will need to upgrade to:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/google/trillian/releases/tag/v1.7.0">Trillian v1.7.0</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/google/certificate-transparency-go/releases/tag/v1.3.0">certificate-transparency-go v1.3.0</a></p>
</li>
</ul>
</li>
<li><p>Set up PostgreSQL.</p>
</li>
<li><p>Configure the PostgreSQL instance with the appropriate schemas and permissions.</p>
</li>
<li><p>See <a target="_blank" href="https://github.com/google/trillian/tree/master/examples/deployment">Trillian deployment examples</a> which now include a <a target="_blank" href="https://github.com/google/trillian/tree/master/examples/deployment/postgresql">PostgreSQL docker-compose example</a>.</p>
</li>
</ol>
<p>Whether you’re setting up your first CT log or scaling an existing one, you now have more choices than ever and can now leverage PostgreSQL’s reliability and performance.</p>
]]></content:encoded></item><item><title><![CDATA[Announcing the Alpha release of Trillian Tessera]]></title><description><![CDATA[We are thrilled to announce the alpha release of Trillian Tessera, the tile-based transparency log designed to improve the lives of log operators. Whether you’re working with Certificate Transparency, Binary Transparency, Firmware Transparency or any...]]></description><link>https://blog.transparency.dev/announcing-the-alpha-release-of-trillian-tessera</link><guid isPermaLink="true">https://blog.transparency.dev/announcing-the-alpha-release-of-trillian-tessera</guid><category><![CDATA[tessera]]></category><category><![CDATA[trillian]]></category><dc:creator><![CDATA[TrustFabric Team]]></dc:creator><pubDate>Thu, 05 Dec 2024 18:23:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/jR4Zf-riEjI/upload/4a1b22c3886bbf4bce6da5cca040eb5d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are thrilled to announce the alpha release of Trillian Tessera, the tile-based transparency log designed to improve the lives of log operators. Whether you’re working with Certificate Transparency, Binary Transparency, Firmware Transparency or any transparency log ecosystem, Tessera brings a new level of performance and flexibility to verifiable log ecosystems. A few weeks ago we first <a target="_blank" href="https://blog.transparency.dev/introducing-trillian-tessera">introduced Trillian Tessera</a> and shared its design philosophy. Since then, we’ve been hard at work, and today we’re excited to share the alpha version.</p>
<h3 id="heading-key-features">Key Features</h3>
<ol>
<li><p><strong>Tiles API -</strong> Trillian v1 uses tiles in <a target="_blank" href="https://github.com/google/trillian/blob/master/docs/storage/storage.md">its internal storage</a>, but does not expose them via its API and instead serves pre-built proofs. Trillian Tessera fully embraces the tiles concept by introducing a <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/tlog-tiles.md">Tiles API</a>, allowing clients direct access to subtree tiles from the log. This API unlocks the full potential of tile-based architecture and has <a target="_blank" href="https://transparency.dev/articles/tile-based-logs/">many benefits</a> including improved cacheability, and making logs easier to spin up and maintain. Ultimately, the tiled log architecture and API significantly reduce the total cost of operation compared with Trillian v1.</p>
</li>
<li><p><strong>Multi-environment Support</strong>: Trillian Tessera has native implementations for on-premises and cloud environments. Initial support is provided for running with AWS, Google Cloud Platform (GCP), POSIX-compliant file systems, or MySQL databases. Tessera takes the most native approach for infrastructure it targets, with each <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/docs/design/philosophy.md#multi-implementation-storage">storage implementation implemented independently</a>. This enables a new level of performance and flexibility that ensures that no matter your infrastructure, Tessera can be deployed and adapted to a wide range of use cases.</p>
</li>
<li><p><strong>High Write Throughput:</strong> Tessera was built with speed in mind. The <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/docs/design/philosophy.md#asynchronous-integration-in-storage-implementation">refined design</a> separates sequencing and integration of entries to allow for higher write throughput and enables applications to be built with better practices, such as offering offline-proof bundles.</p>
</li>
<li><p><strong>Bring-Your-Own Log Personality</strong>: Trillian Tessera enables building of customizable log personalities, including full support for the unique requirements of <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md">static CT API</a>-compliant logs.</p>
</li>
<li><p><strong>Resilience and Availability</strong>: Tessera’s design allows for multiple write frontends to run concurrently on the same storage, ensuring <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/docs/design/philosophy.md#resilience-and-availability">high availability and resilience</a>, and reads to be served directly by storage infrastructure. This setup provides safety measures against accidental misconfigurations, such as launching multiple instances with the same storage settings, offering greater reliability compared to Trillian v1 logs on similar infrastructure.</p>
</li>
</ol>
<h3 id="heading-tessera-performance">Tessera Performance</h3>
<p>Tessera is designed to scale to meet the needs of most currently envisioned workloads in a cost-effective manner. All storage backends have been tested to meet the write throughput of CT-scale loads without issue. The read API of Tessera-based logs scales extremely well due to the immutable-resource-based approach, which allows for:</p>
<ul>
<li><p>Aggressive caching to be applied, e.g. via CDN</p>
</li>
<li><p>Horizontal scaling of read infrastructure (e.g. object storage)</p>
</li>
</ul>
<p>Exact performance numbers are highly dependent on the exact infrastructure being used (e.g. storage type). During our alpha testing phase, we ran extensive benchmarks using <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/tree/833a33cb4f7ce7144adf43c72b66af2b27597b7e/internal/hammer">Hammer</a>, our load testing tool for Tessera logs. This was done to ensure Tessera could handle the demands of high-throughput environments, while ensuring the logs were behaving correctly.</p>
<p>We have captured a snapshot of performance metrics for Trillian Tessera storage implementations under different conditions. Additionally, we documented the methodology for users to recreate these experiments independently. For further details, refer to <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/docs/performance.md">Tessera Storage Performance</a>.</p>
<h3 id="heading-get-started">Get Started</h3>
<p>Trillian Tessera is a Go library designed to make it easy to build and deploy new transparency logs on supported infrastructure. To get started, check out the accompanying <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/tree/main/cmd/conformance#conformance-personalities">Conformance Codelab</a>, which provides a step-by-step guide for setting up your first log, writing some entries to it using HTTP, and inspecting its contents with ease.</p>
<p>We’ve made it easy to start experimenting with Tessera by providing these example personalities:</p>
<ol>
<li><p><a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/cmd/conformance/posix">POSIX</a>: example of operating a log backed by a local filesystem. This example runs an HTTP web server that takes arbitrary data and adds it to a file-based log.</p>
</li>
<li><p><a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/cmd/conformance/mysql">MySQL</a>: example of operating a log that uses MySQL. This example is most easily deployed via docker compose, which allows for easy setup and teardown.</p>
</li>
<li><p><a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/cmd/conformance/aws">AWS:</a> example of operating a log running on AWS. This example can be deployed via terraform, see the <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/deployment/live/aws/codelab#aws-codelab-deployment">deployment instructions</a>.</p>
</li>
<li><p><a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/deployment/live/aws/codelab#aws-codelab-deployment">GCP:</a> example of operating a log running in GCP. This example can be deployed via terraform, see the <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/tree/main/deployment/live/gcp/conformance#manual-deployment">deployment instructions</a>.</p>
</li>
</ol>
<p>For more information on getting started and to stay updated with future examples, visit the <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/tree/main?tab=readme-ov-file#getting-started">Getting Started</a> section in the Tessera README.</p>
<h3 id="heading-get-involved-in-the-alpha">Get Involved in the Alpha</h3>
<p>As we continue to refine and enhance Trillian Tessera, we invite developers and organizations to start using the alpha. Your feedback on Tessera’s performance and usability is crucial as we work towards a full release. By trying out the alpha, you’ll have the opportunity to influence the direction of development and ensure it meets the needs of the community. We also welcome contributors - check out our <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera/blob/main/CONTRIBUTING.md">contributing guide</a> for more details.</p>
<h3 id="heading-whats-next">What’s Next?</h3>
<p>Looking forward, we have ambitious plans for Trillian Tessera. Future plans will focus on adding features that enhance usability and integration with other tools, especially based on feedback we hear from our early adopters. We are developing a <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md">static CT API</a> compatible personality built with Tessera, anticipated in early 2025. Additionally, we are committed to ensuring that existing Trillian users have a smooth migration path to Trillian Tessera logs.</p>
<p>Stay tuned for updates, and if you’re interested in getting involved check out the <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera">Tessera Github repository</a> and sign up to the <a target="_blank" href="https://transparency.dev/slack/">Transparency.dev Slack</a>.</p>
<p>This release marks an important milestone for the Transparency.dev community as tiled-logs become available for any transparency ecosystem. Thank you for your support, and we can’t wait to see what you build with Trillian Tessera!</p>
]]></content:encoded></item><item><title><![CDATA[Can I Get A Witness (Network)?]]></title><description><![CDATA[While transparency logs are recognized as providing the highest standards for trust in the industry, this only remains true if transparency systems deliver on their promise of providing a verifiable, globally consistent, append-only list of data.
Bre...]]></description><link>https://blog.transparency.dev/can-i-get-a-witness-network</link><guid isPermaLink="true">https://blog.transparency.dev/can-i-get-a-witness-network</guid><category><![CDATA[armoredwitness]]></category><category><![CDATA[transparency.dev]]></category><category><![CDATA[#sigstore]]></category><dc:creator><![CDATA[Tracy Miranda]]></dc:creator><pubDate>Thu, 07 Nov 2024 14:11:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730905469011/25ee1237-a090-412e-9706-6cbadeed99a9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While transparency logs are recognized as providing the <a target="_blank" href="https://blog.transparency.dev/transparencydev-summit-recap">highest standards for trust in the industry</a>, this only remains true if transparency systems deliver on their promise of providing a verifiable, globally consistent, append-only list of data.</p>
<p>Breaking down these properties:</p>
<ul>
<li><p><strong>Verifiable</strong>: you can cryptographically prove that the data hasn’t been unexpectedly changed</p>
</li>
<li><p><strong>Globally consistent</strong>: whoever comes to the log, wherever they are in the world, and whoever they happen to be, sees the same list of the data in the log</p>
</li>
<li><p><strong>Append-only</strong>: once data is added to the log it can neither be deleted nor modified; new data can only be added to the end of the log</p>
</li>
</ul>
<p>So how do we ensure that these properties are being met in a transparency system? How do we make sure our verifiable systems are actually <em>verified?</em></p>
<h2 id="heading-why-witnessing-matters">Why Witnessing Matters</h2>
<p>This is the exact problem that witnesses were invented to solve. A witness is an entity that implements a <a target="_blank" href="https://arxiv.org/pdf/2011.04551">gossip protocol</a> to effectively verify that logs are <strong>append-only</strong>, ensuring their integrity and preventing unauthorized modifications. When operating in coordination with other such entities, witnesses help safeguard against split view attacks by maintaining <strong>global consistency</strong> for all logs. This collaborative approach ensures that all log users have a unified, tamper-resistant view of the log’s history.</p>
<h2 id="heading-how-do-witnesses-work">How do Witnesses Work?</h2>
<p>A witness cosigns only checkpoints that are consistent with those it has previously signed. It fetches a <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/tlog-checkpoint.md">checkpoint</a> from a transparency log and signs it only after verifying a consistency proof, ensuring the checkpoint is consistent with any previous checkpoints it has signed. By doing so it asserts the append-only security property of the log. Only a single witness is required to prove a log is append-only.</p>
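<p>The consistency check at the heart of witnessing can be sketched in code. Below is a minimal, illustrative Python implementation of RFC 6962 Merkle tree hashing and consistency proofs — a sketch to show the idea, not what a production witness runs (real witnesses verify signed checkpoints in the note format and fetch proofs from the log):</p>

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 leaf hash: H(0x00 || entry)
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # RFC 6962 interior node hash: H(0x01 || left || right)
    return hashlib.sha256(b"\x01" + left + right).digest()

def split(n: int) -> int:
    # Largest power of two strictly less than n (n >= 2).
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def mth(entries) -> bytes:
    # Merkle Tree Hash of a list of entries (RFC 6962, section 2.1).
    n = len(entries)
    if n == 0:
        return hashlib.sha256(b"").digest()
    if n == 1:
        return leaf_hash(entries[0])
    k = split(n)
    return node_hash(mth(entries[:k]), mth(entries[k:]))

def consistency_proof(m, entries, flag=True):
    # PROOF(m, D[n]) via SUBPROOF (RFC 6962, section 2.1.2), 1 <= m <= n.
    n = len(entries)
    if m == n:
        return [] if flag else [mth(entries)]
    k = split(n)
    if m <= k:
        return consistency_proof(m, entries[:k], flag) + [mth(entries[k:])]
    return consistency_proof(m - k, entries[k:], False) + [mth(entries[:k])]

def check_consistency(m, n, old_root, new_root, proof) -> bool:
    # Reconstruct both the size-m and size-n roots from old_root plus the
    # proof, mirroring the SUBPROOF recursion, and compare.
    it = iter(proof)
    def recon(m, n, flag):
        if m == n:
            h = old_root if flag else next(it)
            return h, h
        k = split(n)
        if m <= k:
            old_l, new_l = recon(m, k, flag)
            right = next(it)  # subtree over entries [k:n] - all beyond the old tree
            return old_l, node_hash(new_l, right)
        old_r, new_r = recon(m - k, n - k, False)
        left = next(it)       # subtree over entries [0:k] - entirely in the old tree
        return node_hash(left, old_r), node_hash(left, new_r)
    try:
        got_old, got_new = recon(m, n, True)
    except StopIteration:
        return False          # proof too short
    if next(it, None) is not None:
        return False          # proof has leftover elements
    return got_old == old_root and got_new == new_root
```

<p>A witness holding the signed size-<code>m</code> checkpoint runs exactly this kind of check before cosigning the size-<code>n</code> one: if the check fails, the log has rewritten history and the witness refuses to sign.</p>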
<p>A set of witnesses (ideally an odd number) can be used to detect split-view attacks and to reliably establish that a log is globally consistent. The more trusted witnesses that have seen the same view, the more you can trust that view of the log. This can be strengthened further by drawing on a finite pool of witnesses and requiring an N-of-M quorum of their signatures to guarantee that the log is globally consistent.</p>
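<p>The N-of-M quorum idea itself is simple; a hypothetical policy check (the names below are invented for illustration, not a real witness API) might look like:</p>

```python
def meets_quorum(cosigners, trusted_witnesses, threshold: int) -> bool:
    """Accept a checkpoint only if at least `threshold` (N) of the
    `trusted_witnesses` pool (M) have cosigned it."""
    return len(set(cosigners) & set(trusted_witnesses)) >= threshold

# Example policy: a pool of M=5 witnesses, requiring N=3 cosignatures.
pool = {"witness-a", "witness-b", "witness-c", "witness-d", "witness-e"}
print(meets_quorum({"witness-a", "witness-c", "witness-e"}, pool, 3))  # True
print(meets_quorum({"witness-a", "mallory"}, pool, 3))                 # False
```

<p>A real policy would additionally verify each cosignature cryptographically before counting it; only signatures from the trusted pool count toward the quorum.</p>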
<p>It is important to note that witnesses are not concerned with the contents of the log's leaves. For example, for a firmware transparency log, a witness cannot validate that legitimate firmware was logged. Verifiers such as auditors or monitors ensure log contents are legitimate. Such verifiers are bespoke to a specific ecosystem, unlike witnesses, which are designed to witness almost any log. Witnesses underpin these verifiers by ensuring that the same view of the contents is presented to every log user.</p>
<h2 id="heading-evolution-of-the-witness-network">Evolution of the Witness Network</h2>
<h3 id="heading-omniwitness">OmniWitness</h3>
<p>The first significant implementation of a witness network was the <a target="_blank" href="https://github.com/transparency-dev/witness/tree/main/cmd/omniwitness">OmniWitness</a> network. Developed by the TrustFabric team, the OmniWitness network monitored several logs that use the <a target="_blank" href="https://github.com/transparency-dev/formats/tree/main/log">generic checkpoint format</a>. These logs include:</p>
<ul>
<li><p>Go SumDB</p>
</li>
<li><p>Sigsum</p>
</li>
<li><p>Sigstore</p>
</li>
<li><p>Android Binary Transparency</p>
</li>
<li><p>LVFS</p>
</li>
<li><p>Armored Witness</p>
</li>
</ul>
<p>A number of participants helped run the OmniWitness network and get it working. However, one significant problem that arose was handling policy updates (e.g. adding a new log to the monitored list) and software updates in a deterministic way. Nevertheless, it was an invaluable way to kick-start the concept.</p>
<h3 id="heading-armoredwitness">ArmoredWitness</h3>
<p>To create a more scalable and viable witness network, <a target="_blank" href="https://github.com/transparency-dev/armored-witness">ArmoredWitness</a> introduces a specialized, lightweight, plug-and-play solution that custodians around the world can easily install, verify and maintain. This iteration builds on the strengths of the OmniWitness software, integrating it with custom, open-source hardware and tailored packaging to streamline deployment.</p>
<p>The ArmoredWitness was engineered to be fully transparent, open to inspection and verification. Key features include:</p>
<ul>
<li><p>Open source hardware and software</p>
</li>
<li><p>Reproducible builds with a firmware transparency log: updates are easily managed and traceable, establishing clear, auditable provenance.</p>
</li>
<li><p>Full boot-chain verification from mask ROM to running software</p>
</li>
</ul>
<p>Currently, 15 ArmoredWitness devices are deployed with trusted custodians, collaboratively bootstrapping a diverse, tamper-resistant witness network. This network not only offers greater resiliency and minimizes the risk of coercion of any one team, but also serves as a living example of fully end-to-end firmware transparency in action.</p>
<h2 id="heading-looking-forward-the-future-of-witness-networks">Looking Forward: The Future of Witness Networks</h2>
<p>As the adoption of transparency logs grows, implementing a comprehensive set of transparency roles to uphold core transparency principles is more important than ever. Witnessing plays a key role in this ecosystem, underpinning the append-only, globally consistent and verifiable properties. The ArmoredWitness network advances the field by offering the first sustainable, operational network capable of supporting multiple logs, regardless of payload.</p>
<p>Looking forward, recent community discussions have focussed on a number of areas, including:</p>
<ul>
<li><p>Driving adoption of the ArmoredWitness network</p>
</li>
<li><p>Addressing scalability challenges such as configuration and format standardization</p>
</li>
<li><p>Establishing robust policy frameworks </p>
</li>
<li><p>Increasing the number of witnesses, as well as the reputational units, in the ecosystem. (TrustFabric, as the operators of the ArmoredWitness, are the reputational unit behind the current network.) The more independent reputational units running witness networks, the stronger the ecosystem becomes.</p>
</li>
</ul>
<p>By tackling these challenges, we can build towards the goal of ensuring resilient, scalable witness networks that reinforce transparency and trust across log ecosystems.</p>
<p><em>This blog post is based on the talk ‘</em><a target="_blank" href="https://youtu.be/v9cgvZXRRZU?si=ACAVHUffrKNyLPVQ"><em>ArmoredWitness, putting verifiable transparency in your verifiable transparency</em></a><em>’ by Martin Hutchinson.</em></p>
<p><div class="embedded-content"><a class="embed-card" data-card-width="600px" data-card-key="2e4d628b39a64b99917c73956a16b477" data-card-controls="0" data-card-theme="light" href="https://youtu.be/v9cgvZXRRZU?si=xwylnSjCsaVA6hi"></a></div></p>
]]></content:encoded></item><item><title><![CDATA[Bullseyes and Breakthroughs: A Recap of the Transparency.dev Summit]]></title><description><![CDATA[It’s a special day when you find yourself in London, playing darts while discussing transparency technologies, and with a couple of Levchin prize winners on hand for added inspiration. That was the scene on Wednesday 9th of October after the first da...]]></description><link>https://blog.transparency.dev/transparencydev-summit-recap</link><guid isPermaLink="true">https://blog.transparency.dev/transparencydev-summit-recap</guid><category><![CDATA[transparency.dev]]></category><category><![CDATA[community]]></category><category><![CDATA[certificate transparency]]></category><dc:creator><![CDATA[Tracy Miranda]]></dc:creator><pubDate>Tue, 22 Oct 2024 15:38:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729188659116/5c15a7de-23a4-43fa-9fdf-84cee2ff3e9d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It’s a special day when you find yourself in London, playing darts while discussing transparency technologies, and with a couple of Levchin prize winners on hand for added inspiration. That was the scene on Wednesday 9th of October, after the first day of the Transparency.dev Summit. The 3-day event brought together practitioners of transparency technologies to meet peers, share best practices, and learn about the latest developments in the community. Here are some of the highlights.</p>
<h3 id="heading-evolution-of-the-bar-for-trust">Evolution of the Bar for Trust</h3>
<p>The summit kicked off with an inspiring keynote from Al Cutter who took us through the fascinating evolution of transparency technologies. He shared how these technologies have redefined trust in the industry and underscored the growing importance of transparency as the gold standard. Cutter also highlighted <a target="_blank" href="https://github.com/C2SP/C2SP">C2SP specs</a> which are driving interoperability across ecosystems. Cutter’s insights set a powerful tone for the rest of the summit, framing the discussions around how we can drive further innovation through collaboration in open source.</p>
<p><div class="embedded-content"><a class="embed-card" data-card-width="600px" data-card-key="2e4d628b39a64b99917c73956a16b477" data-card-controls="0" data-card-theme="light" href="https://youtu.be/IJfgKh3gwOQ?si=IG03ymKEQ89yaZ_"></a></div></p>
<h3 id="heading-tiled-logs-are-the-future">Tiled Logs are the Future</h3>
<p>Matthew McPherrin shared the operator pain of running logs in <a target="_blank" href="https://youtu.be/QZnz8pvsN_w?si=8f5fXNk6cnfbCd-N">his talk</a> about Let’s Encrypt’s experience and the evolution of the <a target="_blank" href="https://sunlight.dev/">Sunlight log</a>. Sunlight has led to the development of the standardized <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md">static-ct API</a>. Al Cutter and Martin Hutchinson also <a target="_blank" href="https://youtu.be/9j_8FbQ9qSc?si=qn0ET1uhfToEC7co">introduced Trillian Tessera</a>, a new tile-based log that supports the static-ct API but also works for any transparency log. Hayden Blauzvern gave an <a target="_blank" href="https://youtu.be/au_nkj0iBj8?si=vMZI_f1GsxInJWIB">overview of Sigstore</a> and highlighted their efforts to capitalize on tiled logs, as they experiment with Trillian Tessera with the goal of enhancing simplicity and reducing costs.</p>
<h3 id="heading-a-witness-network-is-coming">A Witness Network is Coming</h3>
<p>Andrea Barisani, Filippo Valsorda, Martin Hutchinson and Al Cutter came together to explore the journey from the <a target="_blank" href="https://youtu.be/v9cgvZXRRZU?si=DGhVbxfJZzvwM_Lc">foundational concept of “witnessing”</a> to the realization of <a target="_blank" href="https://youtu.be/uZXESulUuKA?si=bLCbHBhALvBNlWCL">fully transparent, end-to-end ecosystems</a>. Their four talks interlocked seamlessly to provide a holistic view of witnessing and its broader impact.</p>
<h3 id="heading-certificate-transparency-continues-to-grow-and-evolve">Certificate Transparency Continues to Grow and Evolve</h3>
<p>Joe DeBlasio and Philippe Boneff spoke about <a target="_blank" href="https://youtu.be/59PU99hQfro?si=nAxmpDW2Acrh_6Db">the challenges transparency ecosystems run into when they scale</a>, and highlighted that CT is going strong, and is a great model for other ecosystems. CT continues to evolve and address challenges such as <a target="_blank" href="https://youtu.be/s0HQCx4wy44?si=uNMz1dzbnzCTHseU">adopting post-quantum cryptography</a>.</p>
<h3 id="heading-transparency-is-successful-the-community-is-awesome">Transparency is Successful, the Community is Awesome!</h3>
<p>Filippo Valsorda emphasized the community’s success, noting how it has broken out of its niche and reached a broader audience. This progress was further reflected in the diversity of the talks, such as Edona Fasllija's presentation on <a target="_blank" href="https://youtu.be/6cNQTMQ5Qgg?si=9m0f6jGiMSQa4wKQ">digital identity</a>, Peter Well’s overview on using <a target="_blank" href="https://youtu.be/Rj8l6Fd_iBg?si=OjBzSlVXRz__mtJ6">transparency in healthcare</a>, and many others, showcasing the wide range of topics now engaging the community. Key transparency and binary transparency were also <a target="_blank" href="https://youtu.be/EdnbGrnN9Zg?si=ZVdM9e5lE7zc3JOo">important topics for discussion</a>.</p>
<p>The summit was a great way for the community to connect and have fun.</p>
<p>“<em>This community is filled with people that want to do the right thing</em>” - Joe DeBlasio, Chrome</p>
<p>“<em>Huge thanks to the #trustfabric and all the org team members for the hard and wonderful work. And to the warm Transparency community for its capacity to welcome new members even if they are not core contributors (by far!)”  -</em> Christophe Brocas, Assurance Maladie security engineer</p>
<p>“<em>My favorite workshop of the year.</em>” - Bas Westerbaan, Cloudflare</p>
<p>If you missed the event or want to stay in touch with the community for future events, check out these channels:</p>
<ul>
<li><p>Join the transparency-dev <a target="_blank" href="https://transparency.dev/slack/">Slack</a></p>
</li>
<li><p>Watch the talk recordings on the <a target="_blank" href="https://www.youtube.com/channel/UC_CUBXljwl-7tmYeOs-7ukQ">transparency-dev YouTube channel</a></p>
</li>
<li><p>Follow the <a target="_blank" href="https://blog.transparency.dev/">Transparency-dev Community Blog</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Introducing Trillian Tessera]]></title><description><![CDATA[Today, we’re introducing Trillian Tessera, a Go library for building tile-based transparency logs (tlogs). As the next evolution of Trillian, Trillian Tessera implements tile-based logs to simplify and reduce the cost associated with building and ope...]]></description><link>https://blog.transparency.dev/introducing-trillian-tessera</link><guid isPermaLink="true">https://blog.transparency.dev/introducing-trillian-tessera</guid><category><![CDATA[tlogs]]></category><category><![CDATA[transparency]]></category><category><![CDATA[certificate transparency]]></category><dc:creator><![CDATA[TrustFabric Team]]></dc:creator><pubDate>Fri, 16 Aug 2024 15:03:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/0hUDzambiwA/upload/982f9d429fc3d63f0f983234e6a9ee99.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, we’re introducing Trillian Tessera, a Go library for building tile-based transparency logs (tlogs). As the next evolution of Trillian, Trillian Tessera implements <a target="_blank" href="https://research.swtch.com/tlog#tiling_a_log">tile-based logs</a> to simplify and reduce the cost associated with building and operating transparency logs. Tile-based logs strengthen transparency systems by being more efficient and cost-effective, providing clients with higher performance.</p>
<h3 id="heading-why-tiles">Why Tiles?</h3>
<p>In the article <a target="_blank" href="https://transparency.dev/articles/tile-based-logs/">Tile-Based Transparency Logs</a>, Jay Hou outlines how the tiles approach works and how it has been tested and proven in ecosystems such as the Go module checksum database and Pixel binary transparency. The article also describes the many benefits of a tile-based architecture:</p>
<ul>
<li><p>Improved cacheability</p>
</li>
<li><p>Logs are easier to spin up</p>
</li>
<li><p>Logs are easier to maintain</p>
</li>
<li><p>Static responses</p>
</li>
<li><p>Uniform interface across logs and ecosystems, e.g. a witness network can be shared across all logs</p>
</li>
</ul>
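<p>To make the tiling concrete, here is a small Python sketch of the tile addressing scheme from the tile-based logs design linked above, as used by the Go checksum database: a tree of height-8 tiles, with tile indexes path-encoded in groups of three decimal digits, every group but the last prefixed with <code>x</code>. This is illustrative only; consult the linked article and specs for the authoritative layout:</p>

```python
def tile_index_path(index: int) -> str:
    # Encode a tile index as base-1000 path elements: 1234067 -> "x001/x234/067".
    groups = []
    while True:
        groups.append("%03d" % (index % 1000))
        index //= 1000
        if index == 0:
            break
    groups.reverse()
    return "/".join(["x" + g for g in groups[:-1]] + [groups[-1]])

def tile_path(height: int, level: int, index: int, width: int = None) -> str:
    # Full path of a tile; a trailing ".p/<width>" marks a partial tile.
    path = "tile/%d/%d/%s" % (height, level, tile_index_path(index))
    if width is not None:
        path += ".p/%d" % width
    return path

def leaf_position(height: int, leaf_index: int):
    # A leaf lives in level-0 tile number (n >> height), at offset n mod 2^height.
    return leaf_index >> height, leaf_index & ((1 << height) - 1)

print(tile_path(8, 0, 1234067))   # tile/8/0/x001/x234/067
print(leaf_position(8, 1000000))  # (3906, 64)
```

<p>Because every full tile is an immutable static file at a fixed path, it can be served from a CDN or object store and cached forever — which is exactly where the cacheability and operational simplicity benefits come from.</p>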
<h3 id="heading-why-tessera">Why Tessera?</h3>
<p>The name “Tessera” was chosen to reflect the core concept of the new design. A <em>tessera</em> is a small square <strong>tile</strong> of stone, glass, or marble used in mosaics. This is an apt metaphor for our project, as it emphasizes the tile-based structure of transparency logs.</p>
<h3 id="heading-trillian-tessera-design-philosophy">Trillian Tessera Design Philosophy</h3>
<p>While a tiles-native API is central to the design, we also incorporated several insights gained over the years. These insights have informed our key design philosophy.</p>
<ul>
<li><p><strong>Simplicity</strong> - Tessera is designed to be user-friendly, easy to adopt, and straightforward to maintain. It offers more cost-effective and simpler operation than Trillian v1. As a library rather than a microservice architecture, Tessera aims to facilitate the easy creation and deployment of new transparency logs on supported infrastructure.</p>
</li>
<li><p><strong>Multi-implementation Storage</strong> - Each supported storage infrastructure for Trillian Tessera is independently implemented, and takes the most "native" approach for the targeted infrastructure. This includes support for both cloud and on-premises environments. Initially we will provide support for GCP and AWS. Additionally, Tessera will offer support for MySQL and POSIX file systems, making it cloud-agnostic and scalable. This allows Tessera to accommodate a range of transparency log use-cases, from high-performance, high-uptime logs like Certificate Transparency (CT) and Sigstore, to those with more modest requirements, such as Firmware Transparency.</p>
</li>
<li><p><strong>Fast Sequencing and Integration of Entries</strong> - In Trillian, entries were added to a log via a fully decoupled queue, providing a timestamp and a future promise of integration. With Trillian Tessera, we have refined the storage contract so that calls to add entries to the log return with a durably assigned sequence number or an error, with integration occurring within seconds. This separation of sequencing and integration allows for higher write throughput and enables applications to be built with better practices, such as offering offline proof bundles.</p>
</li>
</ul>
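<p>The sequencing/integration split described above can be illustrated with a toy model. The class and method names below are invented for illustration and are not the Tessera API:</p>

```python
class ToyLog:
    """Toy model of the sequence-then-integrate contract: add() durably
    assigns an index immediately; integrate() later folds all pending
    entries into the published tree in one batch.
    (Illustrative only - not the Tessera API.)"""

    def __init__(self):
        self._entries = []    # durably sequenced entries (stand-in for real storage)
        self.tree_size = 0    # number of entries in the published tree

    def add(self, entry: bytes) -> int:
        # Sequencing: the caller gets a durable index straight away, so it
        # can build artifacts (e.g. offline proof bundles) that reference it.
        self._entries.append(entry)
        return len(self._entries) - 1

    def integrate(self) -> int:
        # Integration: batch all pending entries into the tree and publish
        # the new tree size (a real log also publishes a new signed root).
        self.tree_size = len(self._entries)
        return self.tree_size

log = ToyLog()
indexes = [log.add(b"entry-%d" % i) for i in range(3)]
print(indexes)        # [0, 1, 2] - assigned before any integration ran
print(log.tree_size)  # 0
print(log.integrate())  # 3
```

<p>The point of the split is that callers never wait on tree building: writes return as soon as a sequence number is durable, while integration proceeds in batches shortly afterwards.</p>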
<h3 id="heading-trillian-tessera-status">Trillian Tessera Status</h3>
<p>Trillian Tessera is currently under active development, and is not yet ready for general use. However, early feedback is very welcome. We expect an alpha release by Q4 2024, with production readiness anticipated in the first half of 2025. We are also developing a <a target="_blank" href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md">CT static API</a> compatible server built with Tessera, which is expected to be released on a similar timeline. Trillian v1 is still in use in production environments by multiple organisations in multiple ecosystems, and is likely to remain so for the mid-term. New ecosystems, or existing ecosystems looking to evolve, should strongly consider planning a migration to Tessera and adopting the patterns it encourages.</p>
<h3 id="heading-get-started">Get Started</h3>
<p>You can learn more about Trillian Tessera and try it out here: <a target="_blank" href="https://github.com/transparency-dev/trillian-tessera">https://github.com/transparency-dev/trillian-tessera</a></p>
]]></content:encoded></item><item><title><![CDATA[Join us at the Transparency.dev Summit]]></title><description><![CDATA[Join us for the Transparency.dev Summit, a 3-day conference for implementers, operators, and users of real world transparency systems. The event takes place on October 9 to 11, 2024 in London, UK.
Certificate Transparency, now in its 11th year, has l...]]></description><link>https://blog.transparency.dev/join-us-at-the-transparencydev-summit</link><guid isPermaLink="true">https://blog.transparency.dev/join-us-at-the-transparencydev-summit</guid><category><![CDATA[certificate transparency]]></category><category><![CDATA[transparencydevsummit ]]></category><dc:creator><![CDATA[Tracy Miranda]]></dc:creator><pubDate>Tue, 23 Jul 2024 12:54:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721738880342/26d80a4a-8f49-4617-b6a4-c592a34174ca.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Join us for the <a target="_blank" href="https://transparency.dev/summit/">Transparency.dev Summit</a>, a 3-day conference for implementers, operators, and users of real world transparency systems. The event takes place on October 9 to 11, 2024 in London, UK.</p>
<p>Certificate Transparency, now in its 11th year, has led the way in making the issuance of website certificates transparent and verifiable. Transparency technology has been adopted in various other domains, including software supply chains, firmware transparency and binary transparency. It continues to evolve to be easier to use and maintain, and to work across multiple use-cases. The evolution and adoption of transparency technologies reflects a growing emphasis on accountability and trust in our digital ecosystems.</p>
<p>The Transparency.dev Summit aims to bring together implementers, operators, and clients of real world transparency systems in order to meet peers, share best practices, and learn about the latest developments in the community. In addition to general transparency topics, the event will also feature dedicated sessions for the Certificate Transparency community.</p>
<p>Attendees will have the opportunity to learn more about transparency systems at scale and the latest developments in the transparency ecosystem. Some central themes of the conference will be:</p>
<ul>
<li><p>Witnessing: A key goal for transparency is to have a cross-ecosystem witness network that can protect against split-view attacks. Talks here will examine the latest developments in the witness network and related technologies.</p>
</li>
<li><p>Certificate Transparency: This will highlight lessons learned from log operators, especially those that can be applied to newer logs and ecosystems looking to emulate the success CT has achieved for over 10 years.</p>
</li>
<li><p>Key Transparency: This emerging space aims to solve the problem of key discovery for providers of end-to-end encryption services.</p>
</li>
<li><p>Binary Transparency: This is a wide umbrella term, covering everything from bringing enhanced security to language ecosystems (e.g. with Sigstore) to firmware transparency and more.</p>
</li>
<li><p>Transparency futures: Learn about new and upcoming evolutions in transparency technologies that make logs easier to run and maintain.</p>
</li>
</ul>
<p>Some of the Trust Fabric team who initiated the conference share their thoughts about the event:</p>
<p>“<em>Transparency.dev Summit, dedicated to real-world transparency systems, is the first event of its kind. After much anticipation, the perfect conditions and growing interest in transparency technologies have come together to make this event a reality. We’re eager to gather the entire community, to connect, share, and see what incredible things we can achieve together.</em>” - Al Cutter, <a target="_blank" href="https://blog.transparency.dev/certificate-transparency-wins-the-levchin-prize">recent Levchin Prize Winner</a>.</p>
<p>“<em>I’m looking forward to seeing everyone and having meaningful conversations about transparency technology, continuing where we left off at CATS. I’m also excited to meet new faces and explore fresh ideas together!</em>” - Martin Hutchinson</p>
<p>“<em>I hope that I can go!</em>” - Jay Hou</p>
<p>If you are interested in submitting a proposal or would like to learn more about the event, please visit <a target="_blank" href="https://transparency.dev/summit/">transparency.dev/summit</a> for the latest information. Registration is not yet open, though you can register your interest here: <a target="_blank" href="https://docs.google.com/forms/d/1BDkE_94DFsKwJg7MF6T1i-e2K3Nh8lYotR3vzi9puOQ/edit#responses">Interest Survey</a>.</p>
<p>We hope you can join us and be part of the conversation driving the future of security and trust in our digital worlds through transparency technologies.</p>
]]></content:encoded></item><item><title><![CDATA[Certificate Transparency wins the Levchin Prize]]></title><description><![CDATA[This year, the prestigious Levchin Prize for Real-World Cryptography was awarded to the pioneers of Certificate Transparency (CT): Al Cutter, Emilia Käsper, Adam Langley, and Ben Laurie. This recognition highlights the critical role CT plays in enhan...]]></description><link>https://blog.transparency.dev/certificate-transparency-wins-the-levchin-prize</link><guid isPermaLink="true">https://blog.transparency.dev/certificate-transparency-wins-the-levchin-prize</guid><category><![CDATA[certificate transparency]]></category><category><![CDATA[community]]></category><dc:creator><![CDATA[Tracy Miranda]]></dc:creator><pubDate>Wed, 15 May 2024 18:37:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1720032741330/8820fed2-f085-4daf-ba4f-f29891abc35a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This year, the prestigious Levchin Prize for Real-World Cryptography was awarded to the pioneers of Certificate Transparency (CT): <strong>Al Cutter, Emilia Käsper, Adam Langley, and Ben Laurie</strong>. This recognition highlights the critical role CT plays in enhancing trust through transparency.</p>
<h3 id="heading-the-significance-of-the-levchin-prize">The Significance of the Levchin Prize</h3>
<p>The <a target="_blank" href="https://rwc.iacr.org/LevchinPrize/index.html#about">Levchin Prize</a> for Real-World Cryptography, named after <a target="_blank" href="https://en.wikipedia.org/wiki/Max_Levchin">Max Levchin</a>, is awarded annually to individuals and projects that have made substantial contributions to practical cryptography. The prize recognizes achievements that have had a profound impact on the security of the digital world. Past winners include the OpenSSL Team, Ralph Merkle and Let's Encrypt.</p>
<p>Awarding the Levchin Prize to Certificate Transparency acknowledges its transformative effect on internet security. By ensuring that all certificates are publicly visible and verifiable, CT has drastically reduced the risk of certificate-related attacks, making the web a safer place for users and businesses alike.</p>
<h3 id="heading-we-wanted-to-give-up"><strong>“We Wanted To Give Up”</strong></h3>
<p>The award was announced at Real World Crypto 2024, chaired by Dan Boneh. Emilia Käsper, Adam Langley, and Ben Laurie were present at the ceremony to receive the award.</p>
<p>In her acceptance speech, Emilia Käsper looked back at the CT journey, starting with the launch of the first CT log, Pilot, in 2013. Emilia revealed that less than a year in, the team could not get the design to work under real-life constraints and were on the verge of giving up. Despite the challenges, the team did not give up. So how did they push through? By <em>believing</em> there had to be a way.</p>
<h3 id="heading-the-next-challenges"><strong>The Next Challenges</strong></h3>
<p>Ben Laurie followed up by sharing the next things we should all be believing in and focussing on. He says “<em>The bad news is that the real work of real-world crypto is not crypto, it’s actually user-experience, ecosystem change and systems engineering.”</em></p>
<p>Ben also highlighted two really huge problems to address in Public Key Infrastructure: Time and Identity. You can watch the whole award ceremony here:</p>
<p><div class="embedded-content"><a class="embed-card" data-card-width="600px" data-card-key="2e4d628b39a64b99917c73956a16b477" data-card-controls="0" data-card-theme="light" href="https://youtu.be/rM__xhcqPo4?si=2mnLeXaIaOLlE85W&amp;t=62"></a></div></p>
<h3 id="heading-the-power-of-community"><strong>The Power of Community</strong></h3>
<p>Ben also acknowledged in his speech that CT took dozens, if not hundreds, maybe even thousands of people to get it deployed and thanked all of them too. Likewise, Al Cutter who was the first engineer on CT and still head of the engineering team that supports CT at Google said: “<em>This recognition also reflects onto everyone who played a part in making CT a reality</em>.”</p>
<p>Congratulations to the entire community on this well-deserved achievement! 👏👏👏</p>
]]></content:encoded></item></channel></rss>