pixl97 17 hours ago

Heh, working with a number of large companies I've seen most of them moving to internally signed certs on everything because of ever-shortening expiration times. They'll have public certs on edge devices/load balancers, but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.

  • rsstack 17 hours ago

    > I've seen most of them moving to internally signed certs

    Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.

    • pavon 17 hours ago

      Yes, but it is a lot more work to run an internal CA and distribute that CA cert to all the corporate clients. In the past getting a public wildcard cert was the path of least resistance for internal sites - no network access needed, and you aren't leaking much info into the public log. That is changing now, and like you said it is probably a change for the better.

      • bravetraveler 10 hours ago

        > A lot more work

        'ipa-client-install' for those so motivated. Certificates are literally one of many things your domain services handle.

        If you're at the scale past what IPA/your domain can manage, well, c'est la vie.

        • Spivak an hour ago

          I think you're being generous if you think the average "cloud native" company is joining their servers to a domain at all. They've certainly fallen out of fashion in favor of the servers being dumb and user access being mediated by an outside system.

          • bravetraveler 20 minutes ago

            I think folks are being facetious wanting more for free. The solutions have been available for literal decades, I was deliberate in my choice.

            Not the average, certainly the majority where I've worked. I have - on good authority - at least two household-name clouds that do enroll their hypervisors to a domain. I'll let you guess which.

            My point is, the difficulty is chosen... and 'No choice is a choice'. I don't care which, that's not my concern. The domain is one of those external things you can choose. Not just some VC toy. I won't stop you.

            The devices are already managed; you've deployed them to your fleet.

            No need to be so generous to their feigned incompetence. Want an internal CA? Managing that's the price. Good news: they buy!

            Don't complain to me about 'your' choices. Self-selected problem if I've heard one.

            Aside from all of this, if your org is being hung up on enrollment... I'm not sure you're ready for key management. Or the other work being a CA actually requires.

            Yes, it's more work. Such is life and adding requirements. Trends - again, for decades - show organizations are generally able to manage with something.

            Literal Clouds do this, why can't you?

  • plorkyeran 15 hours ago

    This is a desired outcome. The WebPKI ecosystem would really like it if everyone stopped depending on them for internal things because it's actually a pretty different set of requirements. Long-lived certs with an internal CA makes a lot of sense and is often more secure than using a public CA.

    • ozim 3 hours ago

      Problem is, browsers will most likely enforce short certificate lifetimes themselves, so internal sites will be affected as well.

      Non-browser things usually don't care even if a cert is expired or untrusted.

      So I expect people still to use WebPKI for internal sites.

      • nickf an hour ago

        'Most likely' - with the exception of Apple enforcing an 825-day maximum for private/internal CAs, this change isn't going to affect those internal certificates.

      • ryao 3 hours ago

        Why would they? The old certificates will expire and the new ones will have short lifespans. Web browsers do not need to do anything.

        That said, it would be really nice if they supported DANE so that websites do not need CAs.
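
        For the curious: DANE works by publishing a TLSA record at _443._tcp.<hostname> containing a hash of the server's certificate or public key, protected by DNSSEC rather than a CA signature. A rough sketch of the client-side lookup, using the third-party dnspython package and a placeholder hostname:

            import dns.resolver  # pip install dnspython

            # A DANE-aware client would compare this record against the
            # certificate presented in the TLS handshake.
            for rr in dns.resolver.resolve("_443._tcp.www.example.com", "TLSA"):
                # e.g. usage=3, selector=1, mtype=1 (DANE-EE) means
                # "match the SHA-256 of the server's own public key"
                print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())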

  • xienze 17 hours ago

    > but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.

    Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
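
    As a sketch of how small that termination layer can be - a Caddyfile along these lines (hostname and upstream port are placeholders; `tls internal` switches Caddy to its built-in local CA, or drop that line to get ACME/Let's Encrypt certs automatically):

        app.internal.example.com {
            tls internal              # use Caddy's local CA for internal names
            reverse_proxy 127.0.0.1:8080
        }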

    • pixl97 17 hours ago

      Unless they are web/tech companies, they aren't doing that. Banks, finance, and large manufacturing are all terminating at F5s and AVIs. I'm pretty sure those update certs just fine, but it's not really what I do these days, so I don't have a direct answer.

      • xienze 17 hours ago

        Sure. The point is, don't bother letting the apps themselves do TLS termination. Too much work that's better handled by something else.

        • hedora 10 hours ago

          Also, moving termination off the endpoint server makes it much easier for three letter agencies to intercept + log.

    • cryptonym 17 hours ago

      You now have to build and self-host a complete CA/PKI.

      Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.

      • stackskipton 17 hours ago

        You could always ask for a wildcard for an internal subdomain and use that instead, so you leak your internal FQDN but not individual hosts.

        • pixl97 15 hours ago

          I'm pretty sure every bank will auto fail wildcard certs these days, at least the ones I've worked with.

          Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.

ghusto 17 hours ago

I really wish encryption and identity weren't so tightly coupled in certificates. If I've issued a certificate, I _always_ care about encryption, but sometimes do not care about identity.

For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.

Pet peeve.

  • tptacek 17 hours ago

    There's minimal security in an unauthenticated encrypted connection, because an attacker can just MITM it.

    • panki27 17 hours ago

      How is an attacker going to MITM an encrypted connection they don't have the keys for, without having rogue DNS or something similar, i.e. faking the actual target?

      • oconnor663 17 hours ago

        They MITM the key exchange step at the beginning, and now they do have the keys. The thing that prevents this in TLS is the chain of signatures asserting identity.

        • panki27 16 hours ago

          You can not MITM a key that is being exchanged through Diffie-Hellman, or have I missed something big?

          • Ajedi32 15 hours ago

            Yes, Mallory just pretends to be Alice to Bob and pretends to be Bob to Alice, and they both establish an encrypted connection to Mallory using Diffie-Hellman keys derived from his secrets instead of each other's. Mallory has keys for both of their separate connections at this point and can do whatever he wants. That's why TLS only uses Diffie-Hellman for perfect forward secrecy after Alice has already authenticated Bob. Even if the authentication key gets cracked later Mallory can't reach back into the past and MITM the connection retroactively, so the DH-derived session key remains protected.
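
            A toy illustration of that attack with textbook unauthenticated finite-field DH (deliberately simplified parameters - real DH uses vetted groups, but the MITM works identically):

                import random

                p, g = 2**127 - 1, 5  # toy public parameters

                def dh_pair():
                    priv = random.randrange(2, p - 1)
                    return priv, pow(g, priv, p)

                a_priv, a_pub = dh_pair()    # Alice
                b_priv, b_pub = dh_pair()    # Bob
                m1_priv, m1_pub = dh_pair()  # Mallory, facing Alice
                m2_priv, m2_pub = dh_pair()  # Mallory, facing Bob

                # Mallory swaps the public values in transit: Alice receives
                # m1_pub believing it is Bob's; Bob receives m2_pub.
                key_alice = pow(m1_pub, a_priv, p)
                key_bob = pow(m2_pub, b_priv, p)

                # Mallory derives both session keys and can relay/re-encrypt.
                assert key_alice == pow(a_pub, m1_priv, p)
                assert key_bob == pow(b_pub, m2_priv, p)

            Nothing in the exchange itself lets Alice notice the swap; only some out-of-band assertion of identity (a certificate chain, a pinned key, a fingerprint check) does.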

      • Ajedi32 17 hours ago

        It's an unauthenticated encrypted connection, so there's no way for you to know whose keys you're using. The attacker can just tell you "Hi, I'm the server you're looking for. Here's my key." and your client will establish a nice secure, encrypted connection to the malicious attacker's computer. ;)

      • simiones 17 hours ago

        Connections never start as encrypted, they always start as plain text. There are multiple ways of impersonating an IP even if you don't control DNS, especially if you are in the same local network.

        • gruez 17 hours ago

          >Connections never start as encrypted, they always start as plain text

          Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.

          https://preview.redd.it/1l4h9e72vp981.jpg?width=640&crop=sma...

          • Ajedi32 16 hours ago

            GP means unencrypted at the wire level. ClientHelloOuter is still unencrypted even with HSTS.

        • jiveturkey 4 hours ago

          Chrome has been doing https-first since April 2021 (v90).

          Safari did some half measures starting in Safari 15 (don't know the year) and now fully defaults to https first.

          Firefox 136 (2025) now does https first as well.

    • steventhedev 17 hours ago

      There is a security model where MITM is not viable - and separating that specific threat from that of passive eavesdropping is incredibly useful.

      • tptacek 17 hours ago

        MITM scenarios are more common on the 2025 Internet than passive attacks are.

        • steventhedev 17 hours ago

          MITM attacks are common, but noisy - BGP hijacks are literally public to the internet by their nature. I believe that insisting on coupling confidentiality to authenticity is counterproductive and prevents the development of more sophisticated security models and network design.

          • orev 16 hours ago

            You don’t need to BGP hijack to perform a MITM attack. An HTTPS proxy can be easily and transparently installed at the Internet gateway. Many ISPs were doing this with HTTP to inject their own ads, and only the move to HTTPS put an end to it.

            • steventhedev 12 hours ago

              Yes. MITM attacks do happen in reality. But by their nature they require active participation, which for practical purposes means leaving some sort of trail. More importantly, by decoupling confidentiality from authenticity, you can easily prevent eavesdropping attacks at scale.

              Which for some threat models is sufficiently good.

              • tptacek 11 hours ago

                This thread is dignifying a debate that was decisively resolved over 15 years ago. MITM is a superset of the eavesdropper adversary and is the threat model TLS is designed to resist.

                It's worth pointing out that MITM is also the dominant practical threat on the Internet: you're far more likely to face a MITM attacker, even from a state-sponsored adversary, than you are a fiber tap. Obviously, TLS deals with both adversaries. But altering the security affordances of TLS to get a configuration of the protocol that only deals with the fiber tap is pretty silly.

                • pyuser583 4 hours ago

                  As someone who had to set up monitoring software for my kids, I can tell you MITM are very real.

                  It’s how I know what my kids are up to.

                  It’s possible because I installed a trusted cert in their browsers, and added it to the listening program in their router.

                  Identity really is security.

                • steventhedev 4 hours ago

                  TLS chose the threat model that includes MITM - there's no good reason that should ever change. All I'm arguing is that having a middle ground between http and https would prevent eavesdropping, and that investment elsewhere could have been used to mitigate MITM attacks (to the benefit of all protocols, even those that don't offer confidentiality). Instead we got OpenSSL and the CA model with all its warts.

                  More importantly - this debate gets raised in every single HN post related to TLS or CAs. Answering with "my threat model is better than yours", or that my threat model is somehow incorrect, is even more silly than offering a configuration of TLS without authenticity. Maybe if we had invested more effort in 802.1X and IPsec then we would get those same guarantees that TLS offers, but for all traffic and for free everywhere, with no need for CA shenanigans or shortening lifetimes. Maybe in that alternative world we would be arguing about whether nonrepudiation is a valuable property or not.

        • BobbyJo 17 hours ago

          What does their commonality have to do with the use cases where they aren't viable?

    • jchw 17 hours ago

      I mean, we do TOFU for SSH server keys* and nobody really seems to bat an eye at that. Today if you want "insecure but encrypted" on the web the main way to go is self-signed which is both more annoying and less secure than TOFU for the same kind of use case. Admittedly, this is a little less concerning of an issue thanks to ACME providers. (But still annoying, especially for local development and intranet.)

      *I mistakenly wrote "certificate" here initially. Sorry.
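
      For reference, the core TOFU idea fits in a few lines - a hypothetical sketch (stdlib only) that pins a server's certificate fingerprint on first contact and alarms if it later changes:

          import hashlib, json, os, ssl

          PIN_FILE = "pins.json"  # hypothetical local pin store

          def check_tofu(host, port=443):
              pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
              # Fetch the leaf certificate without any CA validation.
              pem = ssl.get_server_certificate((host, port))
              fp = hashlib.sha256(pem.encode()).hexdigest()
              if host not in pins:
                  pins[host] = fp  # trust on first use
                  json.dump(pins, open(PIN_FILE, "w"))
              elif pins[host] != fp:
                  raise ssl.SSLError(host + ": fingerprint changed - possible MITM")
              return fp

      The hard part, as the replies note, is everything around it: the unverified first contact, key rotation, and revocation.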

      • hedora 10 hours ago

        TOFU is not less secure than using a certificate authority.

        Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.

        • tptacek 9 hours ago

          TOFU is less secure than using a trust anchor.

          • hedora 9 hours ago

            That’s only true if you operate the trust anchor (possible) and it’s not an attack vector (impossible).

            For example, TOFU where “first use” is a loopback ethernet cable between the two machines is stronger than a trust anchor.

            Alternatively, you could manually verify + pin certs after first use.

            • tptacek 9 hours ago

              There are a couple of these concepts --- TOFU (key continuity) is one, PAKEs are another, pinning a third --- that sort of float around and captivate people because they seem easy to reason about, but are (with the exception of Magic Wormhole) not all that useful in the real world. It'd be interesting to flesh out the complete list of them.

              The thing to think about in comparing SSH to TLS is how frequent counterparty introductions are. New counterparties in SSH are relatively rare. Key continuity still needlessly exposes you to a grave attack in SSH, but really all cryptographic protocol attacks are rare compared to the simpler, more effective stuff like phishing, so it doesn't matter. New counterparties in TLS happen all the time; continuity doesn't make any sense there.

      • tptacek 15 hours ago

        SSH TOFU is also deeply problematic, which is why cattle fleet operators tend to use certificates and not piecewise SSH keys.

        • jchw 15 hours ago

          I've made some critical mistakes in my argument here. I am definitely not referring to using SSH TOFU in a fleet. I'm talking about using SSH TOFU with long-lived machines, like your own personal computers, or individual long-running servers.

          Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.

          To be clear, there are a lot of obvious security problems with this:

          - It relies on me actually checking the fingerprint.

          - SSH keys are valid and trusted indefinitely, so it has to be rotated manually.

          - The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.

          This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.

          As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.

          That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.

          • tptacek 14 hours ago

            I don't understand any of this. If you want TOFU for TLS, just use self-signed certificates. That makes sense for your own internal stuff. For good reason, the browser vendors aren't going to let you do it for public resources, but that doesn't matter for your use case.

            • jchw 13 hours ago

              Self-signed certificates have a terrible UX and worse security; browsers won't remember the trusted certificate so you'd have to verify it each time if you wanted to verify it.

              In practice, this means that it's way easier to just use unencrypted HTTP, which is strictly worse in every way. I think that is suboptimal.

              • tptacek 13 hours ago

                Just add the self-signed certificate. It's literally a TOFU system.

                • jchw 12 hours ago

                  But again, you then get (much) worse UX than plaintext HTTP, it won't even remember the certificate. The thing that makes TOFU work is that you at least only have to verify the certificate once. If you use a self-signed certificate, you have to allow it every session.

                  A self-signed certificate has the benefit of being treated as a secure origin, but that's it. Sometimes you don't even care about that and just want the encryption. That's pretty much where this argument all comes from.

                  • tptacek 12 hours ago

                    Yes, it will.

      • gruez 17 hours ago

        >I mean, we do TOFU for SSH server certificates and nobody really seems to bat an eye at that.

        Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.

        • jchw 15 hours ago

          I'm not arguing for replacing existing uses of HTTPS here, just cases where you would today use self-signed certificates or plaintext.

      • arccy 17 hours ago

        SSH server certificates should not be TOFU; the point of SSH certs is that you can trust the signing key.

        TOFU on ssh server keys... it's still bad, but fewer people are interested in intercepting ssh than tls.

        • tptacek 15 hours ago

          Intercepting and exploiting first-contact SSH sessions is a security conference sport. People definitely do it.

        • jchw 16 hours ago

          I just typed the wrong thing, fullstop. I meant to say server keys; fixed now.

          Also, I agree that TOFU on its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU, because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" sort of option for the web: small-scale cases where you can just verify the key manually if you need to.

      • pabs3 17 hours ago

        You don't have to TOFU SSH server keys, there is a DNSSEC option, or you can transfer the keys via a secure path, or you can sign the keys with a CA.

  • Ajedi32 17 hours ago

    In what situation would you want to encrypt something but not care about the identity of the entity with the key to decrypt it? That seems like a very niche use case to me.

    • xyzzy123 17 hours ago

      Because TLS doesn't promise you very much about the entity which holds the key. All you really know is that they control some DNS records.

      You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.

      • Ajedi32 17 hours ago

        It tells you the entity which holds the key is the actual owner of myfavouriteshoes.com, and not just a random guy operating the free Wi-Fi hotspot at the coffee shop you're visiting. If you don't care about that then why even bother with encryption in the first place?

        • xyzzy123 17 hours ago

          True.

          OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.

          Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case, our threat model does not really include the mossad doing mossad things to our movie server.

          • yjftsjthsd-h 15 hours ago

            If you really don't care, sometimes you can just go plaintext HTTP. I do this for some internal things that are accessed over VPN links. Of course, that only works if you're not doing anything that browsers require HTTPS for.

            Alternatively, I would suggest letsencrypt with DNS verification. Little bit of setup work, but low maintenance work and zero effort on clients.

      • arccy 17 hours ago

        at least it's not evil-government-proxy.com that decided to mitm you and look at your favorite shoes.

        • xyzzy123 17 hours ago

          Indeed and the system is practically foolproof because the government cannot take over DNS records, influence CAs, compromise cloud infrastructure / hosting, or rubber hose the counter-party to your communications.

          Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.

  • ryao 3 hours ago

    If web browsers supported DANE, we would not need CAs for encryption.

  • silverwind 5 hours ago

    I agree, there needs to be a TLS without certificates. Pre-shared secrets would be much more convenient in many scenarios.

    • ryao 3 hours ago

      How about TLS without CAs? See DANE. If only web browsers would support it.

  • grishka 17 hours ago

    I want a middle ground. Identity verification is useful for TLS, but I really wish there was no reliance on ultimately trusted third parties for that. Maybe put some sort of identity proof into DNS instead, since the whole thing relies on DNS anyway.

  • Vegenoid 7 hours ago

    Isn't identity the entire point of certificates? Why use certificates if you only care about encryption?

  • panki27 17 hours ago

    Isn't this exactly the reason why LetsEncrypt was brought to life?

  • charcircuit 17 hours ago

    Having them always coupled disincentivizes bad ISPs from MITMing the connection.

greatgib 17 hours ago

As I said in another thread, this will basically kill any possibility of running your own CA for your own subdomain. Only the big ones embedded in browsers will have the privilege of having their own CA certificate with whatever period they want...

And in terms of security, I think that it is a double-edged sword:

- everyone will be so used to certificates changing all the time, with no certificate pinning anymore, that the day China, a company, or whoever serves you a fake certificate, you will be less able to notice it

- Instead of having closed, read-only systems that connect outside once a year or less to update their certificates, all machines around the world will now have to allow quasi-permanent connections to random certificate servers to keep updating all the time. If ever a DigiCert or Let's Encrypt server, or the "cert updating client", is rooted or has a security issue, most servers around the world could be compromised in a very, very short time.

As a side note, I'm totally laughing at the following explanation in the article:

   47 days might seem like an arbitrary number, but it’s a simple cascade:
   - 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room

So, 47 is not arbitrary, but 1 month + 1/2 month + 1 day are not arbitrary values...

  • nickf an hour ago

    Certificate pinning to public roots or CAs is bad. Do not do it. You have no control over the CA or roots, and in many cases neither does the CA - they may have to change based on what trust-store operators say. Pinning to public CAs or roots or leaf certs, pseudo-pinning (not pinning to a key or cert specifically, but expecting some part of a certificate DN or extension to remain constant), and trust-store limiting are all bad, terrible, no-good practices that cause havoc whenever they are implemented.

  • gruez 17 hours ago

    >As I said in another thread, basically that will kill any possibility to do your own CA for your own subdomain.

    Like, a private CA? All of these restrictions only apply to certificates issued under the WebTrust program. Your private CA can still issue 100-year certificates.

    • greatgib 16 hours ago

      Let's suppose that I'm a competitor of Google and Amazon, and I want to have my own public root CA for mydomain.com to offer my clients subdomains like s3.customer1.mydomain.com, s3.customer2.mydomain.com,...

      • gruez 16 hours ago

        Why do you want this when there are wildcard certificates? That's how the hyperscalers do it as well. Amazon doesn't have a separate certificate for each s3 bucket, it's all under a wildcard certificate.

  • precommunicator 5 hours ago

    > everyone will be so used to certificates changing all the time, and no certificate pinning anymore

    Browser certificate pinning has been deprecated since 2018. No current browsers support HPKP.

    There are alternatives to pinning: DNS CAA records and monitoring CT logs.
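
    For example, a CAA record declares which CAs may issue for a domain (CAs are required to check it before issuance); a quick sketch of auditing one with the third-party dnspython package:

        import dns.resolver  # pip install dnspython

        # e.g. `0 issue "pki.goog"` means only Google Trust Services
        # may issue certificates for this name.
        for rr in dns.resolver.resolve("google.com", "CAA"):
            print(rr.flags, rr.tag.decode(), rr.value.decode())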

  • yjftsjthsd-h 15 hours ago

    If you're in a position to pin certs, aren't you in a position to ignore normal CAs and just keep doing that?

nickf an hour ago

Don't forget the lede buried here - you'll need to re-validate control over your DNS names more frequently too. Many enterprises are used to doing this once-per-year today, but by the time 47-day certs roll around, you'll be re-validating all of your domain control every 10 days (more likely every week).

captn3m0 17 hours ago

This is great news. This would blow a hole in two interesting places where leaf-level certificate pinning is relied upon:

1. mobile apps.

2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making it even worse of a security theater. Or, hopefully, they rightly switch to CAA.

  • DiggyJohnson 17 hours ago

    Do you (or anyone) recommend any text based resources laying out the state of enterprise TLS management in 2025?

    It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.

  • grishka 17 hours ago

    Isn't it usually the server's public key that's pinned? The key pair isn't regenerated when you renew the certificate.

    • toast0 17 hours ago

      Typical guidance is to pin the CA or intermediate, because in case of a key compromise, you're going to need to generate a new key.

      You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.

      What would really be nice, but is unlikely to happen, would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short-term certificates from it. But if those were widespread, they'd need to be short-dated too, so you'd need to pin either the real CA or the public key, and we're back to where we were.
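
      For what it's worth, the mechanism exists in X.509 as the name constraints extension; the missing piece is a public CA willing to issue such a certificate. A hedged sketch of minting one yourself with the Python `cryptography` package (self-signed here, and the names are placeholders):

          import datetime
          from cryptography import x509
          from cryptography.hazmat.primitives import hashes
          from cryptography.hazmat.primitives.asymmetric import ec
          from cryptography.x509.oid import NameOID

          key = ec.generate_private_key(ec.SECP256R1())
          name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com CA")])
          now = datetime.datetime.utcnow()

          cert = (
              x509.CertificateBuilder()
              .subject_name(name)
              .issuer_name(name)  # self-signed for the sketch
              .public_key(key.public_key())
              .serial_number(x509.random_serial_number())
              .not_valid_before(now)
              .not_valid_after(now + datetime.timedelta(days=365))
              # A CA, but only one level deep...
              .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
              # ...and only for names under example.com.
              .add_extension(
                  x509.NameConstraints(
                      permitted_subtrees=[x509.DNSName("example.com")],
                      excluded_subtrees=None,
                  ),
                  critical=True,
              )
              .sign(key, hashes.SHA256())
          )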

      • nickf an hour ago

        I've said it up-thread, but never ever never never pin to anything public. Don't do it. It's bad. You, and even the CA have no control over the certificates and cannot rely on them remaining in any way constant. Don't do it. If you must pin, pin to private CAs you control. Otherwise, don't do it. Seriously. Don't.

trothamel 18 hours ago

Question: Does anyone have a good solution for renewing letsencrypt certificates for websites hosted on multiple servers? Right now, I have one master server that the others forward the well-known requests to, and then I copy the certificate over when I'm done, but I'm wondering if there's a better way.

  • hangonhn 17 hours ago

    We just use certbot on each server. Are you worried about the rate limit? LE rate limits based on the list of domains. So we send the request for the shared domain and the domain for each server instance. That makes each renew request unique per server for the purpose of the rate limit.

  • nullwarp 18 hours ago

    I use DNS verification for this then the server doesn't even need to be exposed to the internet.

  • noinsight 17 hours ago

    Orchestrate the renewal with Ansible - renew on the "master" server remotely, then pull the new key material to your orchestrator and push it to your server fleet. That's what I do. It's not "clean" or "ideal" to my taste, but it works.

    It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.

    The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.

  • navigate8310 18 hours ago

    Have you tried certbot? Or if you want a turnkey solution, you may try Caddy or Traefik that have their own automated certificate generation utility.

  • throw0101b 17 hours ago

    getssl was written with a bit of a focus on this:

    > Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.

    * https://github.com/srvrco/getssl

  • dboreham 17 hours ago

    DNS verification.

CommanderData 27 minutes ago

Why bother with such a long staggered approach?

There should be one change, from 365 to 47 days. This industry doesn't need constant changes, since the end state will force everyone to automate renewals anyway.

zephius 16 hours ago

Old SysAdmin and InfoSec Admin perspective:

Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles, and firmware is not updated by vendors unless it has to be - and in many cases updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability for updating certs. In most cases, certs are manually - and painfully - updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. god for mercy, because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in. In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5 - there are tons of cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud - most enterprises are only partially in that pool.

  • tptacek 16 hours ago

    I think everybody involved knows about the likelihood that things are going to break at enterprise shops with super-expensive commercial middleboxes. They just don't care anymore. We ran a PKI that cared deeply about the concerns of admins for a decade and a half, and it was a fiasco. The coders have taken over, and things are better.

    • zephius 15 hours ago

      That's great for shops with dev teams and in-house developed platforms. Those shops are rare outside Silicon Valley and the Fortune 500 and not likely to increase beyond that. For the rest of us, we are at the mercy of off-the-shelf products and 3rd-party platforms.

      • tptacek 15 hours ago

        I suggest you buy products from vendors who care about the modern WebPKI. I don't think the browser root programs are going to back down on this stuff.

        • nickf an hour ago

          This. Also, re-evaluate how many places you actually need public trust that the webPKI offers. So many times it isn't needed, and you make problems for yourself by assuming it does. I have horror stories I can't fully disclose, but if you have closed networks of millions of devices where you control both the server side and the client side, relying on the same certificate I might use on my blog is not a sane idea.

        • whs 7 hours ago

          Agree. My company was cloud-first, and when we built the new HQ, buying Cisco gear and VMware (as they're the only stack several implementers are offering) felt like sending the company 15 years backwards.

        • zephius 15 hours ago

          I agree, and we try, however that is not a currently widely supported feature in the boring industry specific business software/hardware space. Maybe now it will be, so time will tell.

  • cpach 16 hours ago

    > Hardware vendors are simply incompetent and slow to adapt to security changes.

    Perhaps the new requirements will give them additional incentives.

    • zephius 15 hours ago

      Yeah, just like TLS 1.2 support. Don't even get me started on how that fiasco is still going.

  • yjftsjthsd-h 15 hours ago

    Sounds like everything is solvable via code, and the hardware vendors just suck at it.

    • zephius 15 hours ago

      In a nutshell, yes. From a security perspective, look at Fortinet as an egregious example of just how bad. Palo Alto also has some serious internal issues.

throwaway96751 17 hours ago

Off-topic: What is a good learning resource about TLS?

I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's Encrypt public cert to the client's trusted store. Then I had to choose between Let's Encrypt's root and intermediate certs, and between the key types RSA and ECDSA. I made it work, but it would be good to have an idea of what I'm doing. For example, why the RSA root worked even though my server uses an ECDSA cert. Before I added the root cert to the trusted store, clients used to add fullchain.pem from the server and that worked too - why?

  • dextercd 15 hours ago

    I learned a lot from TLS Mastery by Michael W. Lucas.

1970-01-01 17 hours ago

Your 90-day snapshot backups will soon become 47-day backups. Take care!

  • gruez 17 hours ago

    ???

    Do people really backup their https certificates? Can't you generate a new one after restoring from backup?

  • belter 17 hours ago

    This is going to be one of the obvious traps.

    • DiggyJohnson 17 hours ago

      To care about stale certs on snapshots or the opposite?

      • belter 17 hours ago

        Both. One breaks your restore, the other breaks your trust chain.

_bin_ 18 hours ago

Is there an actual issue with widespread cert theft? That seems like the primary valid reason to do this, not forcing automation.

  • cryptonym 17 hours ago

    Let's Encrypt dropped support for OCSP. CRLs don't scale well. Short-lived certificates probably are a way to avoid certificate revocation quirks.

    • Ajedi32 17 hours ago

      It's a real shame. OCSP with Must-Staple seemed like the perfect solution to this, it just never got widespread support.

      I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.

  • trothamel 18 hours ago

    I suspect it's to limit how long a malicious or compromised CA can impact security.

    • hedora 9 hours ago

      Equivalently, it also maximizes the number of sites impacted when a CA is compromised.

      It also lowers the amount of time it'd take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn't 2025.)

    • rat9988 17 hours ago

      I think OP is asking whether there have been many real cases in practice that pushed for this change.

  • dboreham 17 hours ago

    I think it's more about revocation not working in practice. So the only solution is a short TTL.

  • chromanoid 18 hours ago

    I guess the main reason behind this move is platform capitalism. It's an easy way to cut off the grassroots internet.

    • bshacklett 17 hours ago

      How does this cut off the grassroots internet?

      • chromanoid 17 hours ago

        It makes end-to-end responsibility more cumbersome. There were days when people just put MS FrontPage output on their home server.

        • icedchai 17 hours ago

          Many folks switched to Let's Encrypt ages ago. Certificates are way easier to acquire now than they were in "FrontPage" days. I remember paying hundreds of dollars and sending a fax for "verification."

          • whs 7 hours ago

            Do they offer any long-term commitment for the API, though? I remember that they were blocking old cert-manager clients that were hammering their servers. You can't automate that (as it could be unsafe, like SolarWinds) and they didn't give a one-year window to do it manually either.

            • icedchai 6 hours ago

              You do have a point. I still feel that upgrading your client is less work than manual cert renewals.

          • chromanoid 17 hours ago

            I agree, but I think the pendulum just went too far on the tradeoff scale.

    • jack0813 17 hours ago

      There are very convenient tools to do https easily these days, e.g. Caddy. You can use it to reverse proxy any http server and it will do the cert stuff for you automatically.

      • chromanoid 17 hours ago

        Ofc, but you have to be quite tech-savvy to know this and to set it up. It's also cumbersome in many low-tech situations. There is certificate revocation; I would really like to see the threat model here. I am not even sure if automation helps or just shifts the threat vector to certificate issuing.

    • gjsman-1000 17 hours ago

      If that were true, we would not have Let's Encrypt and tools which can give us certificates in 30 seconds flat once we prove ownership.

      The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)

      (Edit because I'm posting too fast, for the reply):

      > How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?

      Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.

      • nottorp 17 hours ago

        How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?

        Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?

        • icedchai 17 hours ago

          You are always dependent on a 3rd party to some extent: DNS registration, upstream ISP(s), cloud / hosting providers, etc.

          • nottorp 15 hours ago

            And now your list has 2 more items in it …

      • chromanoid 17 hours ago

        I dunno. Self-hosting w/o automation was feasible. Now you have to automate. It will lead to a huge amount of link rot or at least something very similar. There will be solutions but setting up a page e2e gets more and more complicated. In the end you want a service provider who takes care of it. Maybe not the worst thing, but what kind of security issues are we talking about? There is still certificate revocation...

        • icedchai 17 hours ago

          Have you tried caddy? Each TLS protected site winds up being literally a couple lines in a config file. Renewals are automatic. Unless you have a network / DNS problem, it is set and forget. It is far simpler than dealing with manual cert renewals, downloading the certificates, restarting your web server (or forgetting to...)

          • chromanoid 17 hours ago

            Yes, but only for internal stuff. I prefer traefik at the moment. But my point is more about how people use wix over free webspace and so on. While I don't agree with many of Jonathan Blow's arguments, news like this makes me think of his talk "Preventing the collapse of civilization": https://m.youtube.com/watch?v=ZSRHeXYDLko

raggi 17 hours ago

It sure would be nice if we could actually fix dns.

readthenotes1 17 hours ago

I wonder how many forums run by the barely able are going to disappear or start charging.

I fairly regularly get cert-expired problems because the admin is doing it as yak shaving for a secondary hobby.

throw0101b 17 hours ago

Justification:

> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.

> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.

Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.

  • gruez 16 hours ago

    The "less trustworthy" refers to key compromise, not the e-shop going rogue and start scamming customers or whatever.

    • throw0101a 16 hours ago

      Okay, the key is compromised: that means they can MITM the trust relationship. But with modern algorithms you have forward secrecy, so even if you've sniffed/captured the traffic it doesn't help.

      And I would argue that MITMing communications is a lot harder for (non-nation-state) attackers than compromising a host, so trust compromise is a questionable worry.

      • gruez 16 hours ago

        >And I would argue that MITMing communications is a lot hard for (non-nation state) attackers than compromising a host, so trust compromise is a questionable worry.

        By that logic, we don't really need certificates, just TOFU.

        • throw0101d 16 hours ago

          > By that logic, we don't really need certificates, just TOFU.

          It works fairly well for SSH, but that tends to be a more technical audience. But doing a "Always trust" or "Always accept" are valid options in many cases (often for internal apps).

          • tptacek 15 hours ago

            It does not work well for SSH. We just don't care about how badly it works.

            • throw0101d 15 hours ago

              > It does not work well for SSH. We just don't care about how badly it works.

              How "should" it work? Is there a known-better way?

              • tptacek 15 hours ago

                Yes: SSH certificates. (They're unrelated to X509 certificates and the WebPKI).

                • throw0101d 14 hours ago

                  > Yes: SSH certificates. (They're unrelated to X509 certificates and the WebPKI).

                  I am aware of them.

                  As someone in the academic sphere, with researchers SSHing into (e.g.) HPC clusters, this solves nothing for me from the perspective of clients trusting servers. Perhaps it's useful in a corporate environment where the deployment/MDM can place the CA in the appropriate place, but not with BYOD.

                  Issuing CAs to users, especially if they expire, is another thing. From a UX perspective, we can tie password credentials to things like on-site Wifi and web site access (e.g., support wiki).

                  So SSH certs certainly have use-cases, and I'm happy they work for people, but TOFU is still the most useful in the waters I swim in.

                  • tptacek 14 hours ago

                    I don't know what to tell you. The problem with TOFU is obvious: the FU. The FU happens more often than people think it does (every time you log in from a new or reprovisioned workstation) and you're vulnerable every time. I don't really care what you do for SSH (we use certificates) but this is not a workable model for TLS, where FUs are the norm.

                    • throw0101d 14 hours ago

                      > I don't really care what you do for SSH (we use certificates) but this is not a workable model for TLS, where FUs are the norm.

                      It was suggested by someone else: I commented TOFU works for SSH, but is probably not as useful for web-y stuff (except for maybe small in-house stuff).

                      Personally I'm somewhat sad that opportunistic encryption for the web never really took off: if folks connect on 80, redirect to 443 if you have certs 'properly' set up, but even if not, do an "Upgrade" or something to move to HTTPS. Don't necessarily indicate things are "secure" (with the little icon), but scramble the bits anyway: no false sense of security, but make it harder to tap glass in bulk.

aaomidi 17 hours ago

Good.

If you can't make this happen, don't use WebPKI and use internal PKI.

iJohnDoe 17 hours ago

Getting a bit ridiculous.

  • dboreham 17 hours ago

    Looks like a case where there are tradeoffs to be made, but the people with authority over the decision have no incentive to consider one side of the trade.

  • bayindirh 17 hours ago

    Why?

    • nottorp 17 hours ago

      The logical endgame is 30 second certificates...

      • krunck 17 hours ago

        Or maybe the endgame could be: creation of a centralized service that all web servers are required to be registered with and connected to at all times in order to receive their (frequently rotated) encryption keys. Controllers of said service then have kill switch control of any web service by simply withholding keys.

        • nottorp 17 hours ago

          Exactly. And all in the name of security! Think of the children!

      • bayindirh 17 hours ago

        For extremely sensitive systems, I think a more logical endgame is 30 minutes or so. 30 seconds is practically continuous generation.

        A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.

        Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.

        • nottorp 17 hours ago

          > once or twice a year makes sense

          You don't say. Why are the defaults already 90 days or less then?

          • bayindirh 17 hours ago

            Because most of the sites on the internet store much more sensitive information than the sites I gave as an example, and they can afford one or two certificates a year.

            90 days makes way more sense for the "average website" which handles members, has a back office exposed to the internet, and whatnot.

            • nottorp 17 hours ago

              That's not the average website, that's a corporate website or an online store.

              Why do you think all the average web sites have to handle members?

              • bayindirh 17 hours ago

                Give me examples of websites which don't have any kind of member system in place.

                Forums? Nope. Blogging platforms? Nope. News sites? Nope. Wordpresss powered personal page? Nope. Mailing lists with web based management? Nope. They all have members.

                What doesn’t have members or users? Static webpages. How much of the web is a completely static web page? Negligible amount.

                So most of the sites have much more to protect than meets the eye.

                • ArinaS 44 minutes ago

                  > "Negligible amount."

                  Neglecting the independent web is exactly what led to it dying out and the Internet becoming a corporate, algorithm-driven analytics machine. Making it harder to maintain your own independent website, which does not rely on any 3rd party to host or update, will just make fewer people bother.

                • nottorp 17 hours ago

                    I could argue that all your examples except forums do not NEED members or users... except to spy on you and spam you.

                  • bayindirh 16 hours ago

                    I mean, a news site needs their journalists to log in. Your own personal Wordpress needs a user for editing the site. The blog platform I use (mataroa) doesn't even have detailed statistics, yet it serves many users, so they need user support.

                    Web is a bit different than you envision/think.

                    • ArinaS 43 minutes ago

                      > "I mean, a news site needs their journalists to login."

                      Why can't this site just upload HTML files to their web server?

        • panki27 17 hours ago

          That CRL is going to be HUGE.

          • psz 15 hours ago

            Why do you think so? Keep in mind that revoked certs are not included in CRLs once expired (because they are not valid any more).

    • jodrellblank 17 hours ago

      https://mathematicalcrap.com/2022/08/14/the-great-loyalty-oa...

      "When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."

belter 17 hours ago

Are the 47 days to please the current US Administration?

  • eesmith 17 hours ago

    Based on the linked-to page, no:

        47 days might seem like an arbitrary number, but it’s a simple cascade:
    
        * 200 days = 6 maximal month (184 days) + 1/2 30-day month (15 days) + 1 day wiggle room
        * 100 days = 3 maximal month (92 days) + ~1/4 30-day month (7 days) + 1 day wiggle room
        * 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room

junaru 17 hours ago

> For this reason, and because even the 2027 changes to 100-day certificates will make manual procedures untenable, we expect rapid adoption of automation long before the 2029 changes.

Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self-signed certificates everywhere on the most critical infrastructure.

  • panki27 17 hours ago

    People will just roll out almost forever-lasting certificates through their internal CA for all systems that are not publicly reachable.

    • throw0101d 17 hours ago

      > through their internal CA

      Nope. People will create self-signed certs and tell people to just click "accept".

zelon88 17 hours ago

This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.

They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.

This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."

  • chowells 17 hours ago

    I think it's absolutely critical when I'm sending a password to a site that it's actually the site it claims to be. That's identity. It matters a lot.

    • zelon88 15 hours ago

      Not to users. The user who types Wal-Mart into their address bar expects to communicate with Wal-Mart. They aren't going to check if the certificate matches. Only that the icon is green.

      This is where the disconnect comes in. You and I know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.

      And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.

      So you see, Identity isn't the value that people expect from a certificate. It's the encryption.

      Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.
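
      The codepoint trick is easy to demonstrate (though note that modern browsers render many mixed-script names as punycode for exactly this reason):

          import unicodedata

          real = "walmart"
          fake = "w\u0430lmart"  # Cyrillic 'а' in place of Latin 'a'

          print(real == fake)               # False - different strings
          print(unicodedata.name(real[1]))  # LATIN SMALL LETTER A
          print(unicodedata.name(fake[1]))  # CYRILLIC SMALL LETTER A
          print(fake.encode("idna"))        # wire format is punycode: b'xn--...'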

      • chowells 14 hours ago

        Well, no. That's just not true.

        I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.

        Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast not to MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.

        Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.

        • BrandoElFollito 14 hours ago

          You use words that are alien to everyone. Well, there is a small uncertainty in "everyone", and it is there where the people who actually understand DHCP, DoS, etc. live. This is a very, very small place.

          So no, nobody will ever look at a certificate.

          When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the cert details again in the browser.

          • chowells 13 hours ago

            Who said a word about looking at a certificate?

            I said exactly the words I meant.

            > I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.

            Without the identity component, I can't trust that those things I care about are insulated from local interference. With the identity component, I say it's fine to connect to random public wifi. Without it, it wouldn't be.

            That's the relevant level. "Is it ok to connect to public wifi?" With identity validation, yes. Without, no.

            • hedora 9 hours ago

              When you say identity, you mean “the identity of someone that convinced a certificate authority that they controlled walmart.com’s dns record at some point in the last 47 days, or used some sort of out of band authentication mechanism”.

              You don’t mean “Walmart”, but 99% of the population thinks you do.

              Is it OK to trust this for anything important? Probably not. Is OK to type your credit card number in? Sure. You have fraud protection.

              • chowells 9 hours ago

                So what you're saying is that you actually understand the identity portion is critical to how the web is used and you're just cranky. It's ok. Take a walk, get a bite to eat. You'll feel better.

  • gruez 16 hours ago

    >This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.

    "example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.

    [1] https://web.archive.org/web/20171222000208/https://stripe.ia...

    >This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."

    Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There's plenty of IaaS/PaaS services that have letsencrypt, that are orders of magnitude smaller than those hyperscalers.