You have a MongoDB replica set running as a StatefulSet on Kubernetes and need to expose it as an external service. A plain LoadBalancer service picks one of the mongo pods at random, so clients can be routed to a secondary node, which is read-only.

Use the mongo labeler sidecar: it checks which pod is currently the primary and adds a `primary=true` label to it, so you can use that label as a selector in your service definition.
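The core of what the sidecar does each cycle can be sketched roughly like this (a hypothetical Python sketch, not the actual implementation; the function and sample data are made up): read the replica set status, find the member in the `PRIMARY` state, and patch the matching pod's labels.

```python
# Hypothetical sketch of the labeler's primary-detection step.
# Not the real implementation; names here are illustrative only.

def find_primary(members):
    """Return the name of the PRIMARY member from a replSetGetStatus
    'members' list, or None if no primary is currently elected."""
    for member in members:
        if member.get("stateStr") == "PRIMARY":
            return member["name"]
    return None

# Example replSetGetStatus-style data:
members = [
    {"name": "mongo-0.mongo:27017", "stateStr": "PRIMARY"},
    {"name": "mongo-1.mongo:27017", "stateStr": "SECONDARY"},
    {"name": "mongo-2.mongo:27017", "stateStr": "SECONDARY"},
]
print(find_primary(members))  # mongo-0.mongo:27017
```

The sidecar then patches `primary=true` onto the pod that corresponds to the primary member, which is what makes the service selector below work.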
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-external
  labels:
    name: mongo
spec:
  type: LoadBalancer
  ports:
    - name: mongo
      port: 27017
  selector:
    role: mongo
    primary: "true"
```
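To run the labeler next to the replica set, add it as a sidecar container in the StatefulSet pod template. A minimal sketch (the image name is a placeholder; see deployment-example.yaml for the complete version):

```yaml
containers:
  - name: mongo
    image: mongo
    ports:
      - containerPort: 27017
  - name: mongo-labeler
    image: <your-registry>/mongo-labeler:latest  # build with the included Dockerfile
    env:
      - name: LABEL_SELECTOR
        value: "role=mongo"
```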
The pod labeler automatically detects the Kubernetes config while running inside the cluster. If you want to test it outside the cluster, it assumes that your k8s config is stored in `~/.kube/config` and that mongo is reachable on `localhost:27017`.
You can use the `kubectl port-forward mongo-0 27017` command for testing purposes.
- `LABEL_SELECTOR` - labels that describe your mongo deployment
- `NAMESPACE` - restricts where to look for mongo pods
- `DEBUG` - when set to `true`, increases log verbosity
Example:

```yaml
env:
  - name: LABEL_SELECTOR
    value: "role=mongo,environment=dev"
  - name: NAMESPACE
    value: "dev"
  - name: DEBUG
    value: "true"
```
Please use the included Dockerfile to build your own image.
See deployment-example.yaml for a complete example of how to deploy MongoDB with the labeler sidecar.
- Label Updates: The labeler uses `Patch` instead of `Update` to modify pod labels, which prevents conflicts with other controllers and is more efficient.
- Security: The container image uses a distroless base image and runs as a non-root user (UID 65532). The deployment example includes proper seccomp profile configuration (`RuntimeDefault`) to ensure compatibility with modern Kubernetes security policies.
- RBAC: The sidecar requires the following permissions: `get`, `list`, `patch` on pods in the target namespace.
- Seccomp Profile: The deployment example includes `seccompProfile.type: RuntimeDefault`, which is required for Kubernetes 1.19+ with PodSecurityPolicy/PodSecurity standards enabled.
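The RBAC permissions above can be granted with a Role and RoleBinding along these lines (a sketch; the names, namespace, and ServiceAccount are assumptions — see deployment-example.yaml for the authoritative version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongo-labeler
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mongo-labeler
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mongo-labeler
subjects:
  - kind: ServiceAccount
    name: mongo-labeler
    namespace: dev
```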