this post was submitted on 16 Jun 2023

Selfhosted

EDIT: Issue now resolved. It turns out that pointing an A record at a DNS server probably wasn't the best idea. My best theory is that an A record pointing at a DNS server reads as "find the authority for this domain at this other DNS server", which could never resolve. By pointing it at my VPS instead, the name resolved to a definitive IP and the certs were successfully generated.
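
For reference, the relevant records now look roughly like this (values are placeholders; the key change is that the wildcard no longer points at a resolver's IP):

blog     A  <VPS IP>
*.local  A  <VPS IP>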

Hi all, hope someone can help as I'm just confused now!

Long story short, I want to host local services (like ntfy) with trusted certificates. I hoped to do this with Caddy and a wildcard domain (I don't want to expose DNS records for the services I'm running if I don't have to).

In my DNS I have an A record for *.local.example.com pointing at a semi-random IP. I have other services on a VPS on other subdomains, so I can't just use a wildcard for the whole domain. The records look like:

blog     A  <VPS IP>
*.local  A  1.1.1.1

On the server in my home network (which I do not want to expose) I have dnsmasq running. It handles local DNS records for services on the LAN, but deliberately not the remote services on the same domain. Using dig I can see that the local and remote DNS are working as expected. After seeing the DNS-01 challenge error "could not determine zone for domain "_acme-challenge.local.example.com"", I also added an exception in my local DNS so that _acme-challenge.local is forwarded to Cloudflare's DNS at 1.1.1.1. dig confirms this works as expected after restarting dnsmasq.
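
Roughly, the dnsmasq side looks like this (hostnames and IPs below are placeholders, not my real values):

# answer LAN queries for local services directly
address=/ntfy.local.example.com/192.168.1.50
# forward the ACME challenge name upstream so the public answer is used
server=/_acme-challenge.local.example.com/1.1.1.1

And the checks with dig (again, placeholder addresses):

dig ntfy.local.example.com @<LAN dnsmasq IP>
dig blog.example.com @1.1.1.1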

With the following Caddyfile:

*.local.example.com {
        tls {
                # get the wildcard cert via the DNS-01 challenge
                dns <dns provider plugin> <API token>
        }

        # route ntfy.local.example.com to the ntfy backend
        @ntfy host ntfy.local.example.com
        handle @ntfy {
                reverse_proxy ntfy
        }
}

Every DNS-01 challenge fails with "...solving challenges: presenting for challenge: could not determine zone for domain "_acme-challenge.local.example.com"...".

I think this should be possible, but I'm not clear on what I'm missing, so any help is greatly appreciated. I'm just dipping my toes into self-hosting and finally getting practical use out of my Raspberry Pi that's been collecting dust for years.

top 12 comments
[–] [email protected] 2 points 1 year ago (1 children)

I assume you have purchased a public domain (the example.com bit) and have set it up to be publicly resolvable, even if the records are hosted on Cloudflare or something.

You don't need any A records for the DNS-01 challenge from Let's Encrypt. You need a TXT record for _acme-challenge.local.example.com that you can update with whatever challenge string Let's Encrypt replies with when you request the *.local.example.com certificate.
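
Once the record is published you can check it from outside your network with something like:

dig +short TXT _acme-challenge.local.example.com @1.1.1.1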

My guess is the error is from Caddy, and it's saying it can't find the public provider of that zone to update the TXT record for the challenge. Even if you have the correct provider configured, does local.example.com exist in the public DNS server's config?

As a side note, after the cert is issued the _acme-challenge TXT record can be deleted; just be aware that all issued public certs are easily searchable by domain name.

[–] Piatro 1 points 1 year ago (1 children)

Yes, I've got the domain; I just substituted example.com for explanation purposes. And yes, I know public certs are easily searchable, which is why only the wildcard (*.local) is public. Caddy should be handling the DNS record updates as required, but I would assume I'd see an error from the API request to update the record before seeing the cert failure. Maybe it's failing silently. I'll check if possible.

[–] [email protected] 2 points 1 year ago (1 children)

Rereading what you have in the zone file: if that were a standard BIND zone file, a subzone definition would look like

; sub-domain definitions
$ORIGIN local.example.com.
*    IN    A    1.1.1.1

What you have might work, but it doesn't follow the DNS RFCs: the DNS label is "*.local" in the "example.com" zone/domain.

This may come up after you get the API to the public DNS provider working, as the software will add/update a "_acme-challenge" label in the zone you point it at, which would be "example.com".

If the DNS provider makes setting up a proper subzone hard, you can work around it by adding a CNAME record:

_acme-challenge.local    IN    CNAME    _acme-challenge.example.com
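
The validation lookup should follow the CNAME, so the actual TXT record then lives under example.com. A quick sanity check would be something like:

dig +short CNAME _acme-challenge.local.example.com
dig +short TXT _acme-challenge.example.com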

[–] Piatro 1 points 1 year ago (1 children)

Thanks for the suggestion. That wasn't a standard format; I was just trying to write the records out in a way that represented what I was seeing in my DNS control panel, and I now realise it probably would have been clearer as a table. I honestly wasn't sure if *.local would work either, but it's working great now.

[–] [email protected] 1 points 1 year ago

Wildcard DNS entries are not part of an RFC afaik, so the behavior is largely determined by the DNS software in use. AD and, I think, BIND recommend only using them in otherwise empty zones, though in one case at work we have to have both the wildcard and an A record in the zone; we hit strange intermittent resolution failures without that record for some reason.

[–] [email protected] 2 points 1 year ago (1 children)

Are you able to identify which DNS provider you are using? I read the error as being related to the cert resolver not being able to access the correct zone at the DNS provider. I am using Cloudflare and your Caddyfile looks pretty similar to mine, so I'm not sure the issue is there.

One other thing to try is restarting Caddy. I found that sometimes reloading my Caddyfile wasn't enough, and things seemed to stay working after I restarted the Docker container.

[–] Piatro 1 points 1 year ago (2 children)

Yes, it's IONOS. Judging from the other comment, and the fact that my DNS hasn't been changed (I'd assume I should be able to see the ACME challenge record if it were successful), the DNS integration seems to be the culprit. Not sure how to fix it though!

[–] [email protected] 2 points 1 year ago (1 children)

Does Caddy come with the IONOS DNS challenge plugin built in, or do you need to compile it with the plugin?

https://caddyserver.com/docs/build#xcaddy
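
If it does need compiling in, a custom build with xcaddy would be something like this (assuming the provider module is the one under github.com/caddy-dns):

xcaddy build --with github.com/caddy-dns/ionos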

[–] Piatro 1 points 1 year ago

No, but it's an important step I didn't cover in the post, so good spot. I've solved my issue now, see the edit in the post.

[–] Piatro 1 points 1 year ago

So I turned debug mode on and I see no requests to IONOS, which seems like the main problem.
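
(For reference, by debug mode I mean the global option at the top of the Caddyfile, roughly:

{
        debug
}
)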

[–] [email protected] 2 points 1 year ago (1 children)

According to the IONOS API documentation, the API key is formatted as publicprefix.secret.

Is that how you entered it in your config?

https://developer.hosting.ionos.com/docs/getstarted

[–] Piatro 1 points 1 year ago

Yes, thanks.