I ran into a small problem recently while using the site-updating code referenced in Automating Static WordPress Updates. The problem was that I was unable to update content reliably, for two reasons:
- The content was not properly switching out the hostname in the URL when I crawled my backend WordPress site. I implemented something to correct this, but it led to problem #2. I should probably post a new article on the changes I made in my script…
- My script would only crawl the external static site, so updates were not getting published. This led me to create this post!
Now that I have the problems covered, let's get right to it. To resolve the issue, I needed my Kubernetes cluster to have split DNS for certain hosts: my static-site-updating script needed to crawl my backend WordPress site and NOT the public-facing static site.
Edit CoreDNS's configmap
To add a custom entry to your Kubernetes cluster's DNS, you can simply edit the coredns configmap and add a new `hosts` entry. Here is my current coredns configmap:
```yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        import custom/*.override
    }
    import custom/*.server
kind: ConfigMap
```
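For reference, the configmap above can be pulled up and edited with standard kubectl commands. This assumes CoreDNS runs in the kube-system namespace, which is the default on most distributions:

```shell
# Dump the live CoreDNS configmap so you can review the Corefile
kubectl -n kube-system get configmap coredns -o yaml

# Open the configmap in your editor to make changes; since the Corefile
# includes the reload plugin, CoreDNS picks up the saved change on its own
kubectl -n kube-system edit configmap coredns
```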
Based upon the `hosts` plugin documentation provided by CoreDNS, we just add a new `hosts` block and life will be good, right? However, I noticed in the documentation: "This plugin only supports A, AAAA, and PTR records." That's not going to work, since I want to point to another hostname rather than an IP address. Instead, we'll use the [rewrite](https://coredns.io/plugins/rewrite/) syntax. Below is my updated CoreDNS configmap.
```yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite stop {
            name exact live-blog.shellnetsecurity.com. nginx-npp.wordpress.svc.cluster.local
            answer name nginx-npp.wordpress.svc.cluster.local. live-blog.shellnetsecurity.com
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        import custom/*.override
    }
    import custom/*.server
kind: ConfigMap
```
Fixed! And because the Corefile already includes the `reload` plugin, CoreDNS picked up the change with no restart required, unlike with a `hosts`-based approach.
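To confirm the rewrite is working, you can resolve the hostname from inside the cluster and check that it now returns the ClusterIP of the backend service instead of the public site. The busybox image and pod name here are just convenient choices, not anything required:

```shell
# Launch a throwaway pod and query the rewritten hostname
kubectl run dnstest -it --rm --restart=Never --image=busybox:1.36 -- \
  nslookup live-blog.shellnetsecurity.com

# The answer should now be the ClusterIP of
# nginx-npp.wordpress.svc.cluster.local, not the static site's public IP
```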