Hi team,
I am working on integrating a remote Kubernetes cluster, but I am getting the errors below: the wg-easy VPN container cannot spin up and keeps crashing. I am using a Kubernetes cluster deployed on Virtuozzo VHI infrastructure, which otherwise works fine.
19:00:08.104 Information Starting task execution: "Connecting to Kubernetes"
19:00:08.282 Information Installing WG-Easy helm chart on the cluster...
19:00:09.354 Information Making the WG-EASY service types LoadBalancer instead of ClusterIP...
19:00:09.539 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (1)
19:00:10.852 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (2)
19:00:12.554 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (3)
19:00:14.770 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (4)
19:00:17.640 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (5)
19:00:21.360 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (6)
19:00:26.194 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (7)
19:00:32.478 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (8)
19:00:40.642 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (9)
19:00:51.259 Warning Couldn't get The WG-EASY services external-ip information!. Exception: Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown. Retrying... (10)
19:01:05.155 Information Exception of type 'Volo.Abp.Studio.AbpStudioException' was thrown.
19:01:05.155 Information Code:AbpStudio:K8sServiceExternalInfoNotFound
19:01:05.155 Information ---------- Exception Data ----------
ServiceName = ABP-WG-EASY
19:01:05.175 Information Completed task execution: "Connecting to Kubernetes | Making the ABP-WG-EASY-Vpn service type LoadBalancer..."
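For reference, whether the WG-Easy services ever received an external IP can be checked directly against the cluster (the namespace waqar is taken from the kubectl output below):
kubectl get svc -n waqar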
waqar@Waqars-MacBook-Pro ~ % kubectl describe pods abp-wg-easy-7d44c4b5fc-852bj -n waqar
Name: abp-wg-easy-7d44c4b5fc-852bj
Namespace: waqar
Priority: 0
Service Account: default
Node: abpkubeclusterv2-dtvywrj42ei2-node-0/10.0.0.12
Start Time: Mon, 30 Jun 2025 19:00:10 +0500
Labels: app=wg-easy-5.0.0
app.kubernetes.io/instance=abp-wg-easy
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=wg-easy
app.kubernetes.io/version=latest
helm-revision=1
helm.sh/chart=wg-easy-5.0.0
pod-template-hash=7d44c4b5fc
pod.name=main
release=abp-wg-easy
Annotations: rollme: iBVIi
Status: Running
IP: 10.100.1.218
IPs:
IP: 10.100.1.218
Controlled By: ReplicaSet/abp-wg-easy-7d44c4b5fc
Containers:
abp-wg-easy:
Container ID: containerd://9834d746465c6830097af720a0de8afe6c95bbe8c9637d63a8347bad82350b74
Image: docker.io/weejewel/wg-easy:7@sha256:a756cfded755bca8391fa90e8f5945e74f7e50e4370840647c5b578d694b32cd
Image ID: docker.io/weejewel/wg-easy@sha256:a756cfded755bca8391fa90e8f5945e74f7e50e4370840647c5b578d694b32cd
Ports: 51715/TCP, 51820/UDP
Host Ports: 0/TCP, 0/UDP
SeccompProfile: RuntimeDefault
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 30 Jun 2025 19:47:10 +0500
Finished: Mon, 30 Jun 2025 19:47:10 +0500
Ready: False
Restart Count: 14
Limits:
cpu: 4
memory: 8Gi
Requests:
cpu: 10m
memory: 50Mi
Liveness: tcp-socket :51715 delay=10s timeout=5s period=10s #success=1 #failure=5
Readiness: tcp-socket :51715 delay=10s timeout=5s period=10s #success=2 #failure=5
Startup: tcp-socket :51715 delay=10s timeout=2s period=5s #success=1 #failure=60
Environment:
TZ: UTC
UMASK: 0022
UMASK_SET: 0022
PUID: 0
USER_ID: 0
UID: 0
PGID: 568
GROUP_ID: 568
GID: 568
PASSWORD: 82edfe6d
PORT: 51715
WG_ALLOWED_IPS: 0.0.0.0/0, ::/0
WG_DEFAULT_ADDRESS: 10.8.0.x
WG_DEFAULT_DNS: 1.1.1.1
WG_HOST: abp-wg-easy-vpn
WG_MTU: 1420
WG_PERSISTENT_KEEPALIVE: 0
WG_PORT: 52833
Mounts:
/dev/shm from devshm (rw)
/etc/wireguard from config (rw)
/shared from shared (rw)
/tmp from tmp (rw)
/var/logs from varlogs (rw)
/var/run from varrun (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: abp-wg-easy-config
ReadOnly: false
devshm:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
shared:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
varlogs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
varrun:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Warning FailedScheduling 48m default-scheduler 0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
Normal Scheduled 48m default-scheduler Successfully assigned waqar/abp-wg-easy-7d44c4b5fc-852bj to abpkubeclusterv2-dtvywrj42ei2-node-0
Normal SuccessfulAttachVolume 48m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-3d78dbcd-c2b8-46d2-9479-1e73ecfa42a2"
Normal Started 48m (x4 over 48m) kubelet Started container abp-wg-easy
Normal Pulled 47m (x5 over 48m) kubelet Container image "docker.io/weejewel/wg-easy:7@sha256:a756cfded755bca8391fa90e8f5945e74f7e50e4370840647c5b578d694b32cd" already present on machine
Normal Created 47m (x5 over 48m) kubelet Created container abp-wg-easy
Warning BackOff 3m35s (x228 over 48m) kubelet Back-off restarting failed container abp-wg-easy in pod abp-wg-easy-7d44c4b5fc-852bj_waqar(5a153168-e87f-412a-96b6-533b09e94f81)

waqar@Waqars-MacBook-Pro ~ % kubectl get pods -n waqar
NAME                           READY   STATUS             RESTARTS        AGE
abp-wg-easy-7d44c4b5fc-852bj   0/1     CrashLoopBackOff   14 (100s ago)   48m
4 Answer(s)
To resolve the CrashLoopBackOff issue with the abp-wg-easy container when integrating ABP Studio with a remote Kubernetes cluster, follow these steps:
Check PersistentVolumeClaim (PVC) Status:
- The pod event log shows: "pod has unbound immediate PersistentVolumeClaims". This means the required PVC (abp-wg-easy-config) is not bound to a PersistentVolume.
- Run: kubectl get pvc -n waqar
- Ensure the PVC is in the Bound state. If not, you need to create a suitable PersistentVolume or check your storage class configuration (see also the command below).
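If the PVC stays in the Pending state, its events usually name the cause. A minimal check, using the claim name from the pod description above:
kubectl describe pvc abp-wg-easy-config -n waqar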
Review StorageClass and Volume Provisioning:
- Make sure your Kubernetes cluster supports dynamic volume provisioning and has a default StorageClass. If not, you may need to manually create a PersistentVolume that matches the PVC requirements.
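A quick way to verify that a StorageClass exists and which one is the default (the default class is marked "(default)" next to its name):
kubectl get storageclass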
Check Container Logs for More Details:
- Get the logs for the crashing container: kubectl logs abp-wg-easy-7d44c4b5fc-852bj -n waqar
- Look for specific errors related to configuration, permissions, or storage (see also the command below).
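Since the container is crash-looping, the logs of the last failed run are often more informative than the current attempt's:
kubectl logs abp-wg-easy-7d44c4b5fc-852bj -n waqar --previous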
Verify Resource Limits:
- The container requests 10m CPU and 50Mi memory but has limits set to 4 CPU and 8Gi memory. Ensure your cluster nodes have enough resources available.
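Per-node headroom can be inspected in the "Allocated resources" section of the node description, for example:
kubectl describe nodes | grep -A 8 "Allocated resources"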
Network and Service Type:
- ABP Studio attempts to set the WG-EASY service type to LoadBalancer. If your Virtuozzo VHI infrastructure does not support external LoadBalancers, the service may not get an external IP, causing connection failures.
- Check if your cluster supports LoadBalancer services. If not, consider using NodePort or configuring an ingress controller.
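For example, the service type could be switched manually. A sketch, assuming the service is named abp-wg-easy-vpn as in the task log above (note that ABP Studio may still expect a LoadBalancer address when it connects):
kubectl patch svc abp-wg-easy-vpn -n waqar -p '{"spec":{"type":"NodePort"}}'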
WireGuard Password Consistency:
- If you have previously installed WireGuard VPN on this cluster, ensure you use the same password in the ABP Studio Kubernetes Profile → Secrets tab. Otherwise, the connection will fail.
Clean Up and Retry:
- If the pod is stuck, try deleting the pod and PVC, then reconnect via ABP Studio to let it re-provision resources.
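A sketch of that clean-up, using the names from the output earlier in this thread (the Deployment recreates the pod automatically, and deleting the PVC discards any stored WireGuard configuration):
kubectl delete pod abp-wg-easy-7d44c4b5fc-852bj -n waqar
kubectl delete pvc abp-wg-easy-config -n waqar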
Cluster Compatibility:
- Ensure your Kubernetes version and infrastructure are compatible with the requirements of the wg-easy Helm chart and ABP Studio.
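The client and server versions can be checked with:
kubectl version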
By addressing the PVC binding issue and ensuring your cluster supports the required service types and storage, the CrashLoopBackOff should be resolved.
Sources:
- https://abp.io/docs/latest/studio/kubernetes
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
I have checked, validated, and applied all of the above, but the issue persists.
waqar@Waqars-MacBook-Pro ~ % kubectl get pvc -n waqar
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
abp-wg-easy-config   Bound    pvc-3d78dbcd-c2b8-46d2-9479-1e73ecfa42a2   100Gi      RWO            cinder-csi     <unset>                 54m
waqar@Waqars-MacBook-Pro ~ % kubectl logs abp-wg-easy-7d44c4b5fc-852bj -n waqar
exec /usr/local/bin/docker-entrypoint.sh: exec format error
(since the container is in a crash state)
I have already tested all the remaining points, but the issue is still not resolved.
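For what it's worth, an "exec format error" from the entrypoint typically means the container image was built for a different CPU architecture than the node it runs on (for example, an amd64-only image on an arm64 node). The node architecture is exposed through the standard kubernetes.io/arch label:
kubectl get nodes -L kubernetes.io/arch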
Hello Waqar,
Sorry for the late reply. I'm currently working on your issue. While trying to reproduce the problem, I discovered a different bug, which I'm addressing at the moment. Once that's resolved, I’ll continue working on reproducing your issue. If I'm still unable to reproduce it, I may reach out to request some additional information. Thank you for your understanding.