I am running into an issue where I cannot configure the ingress to reach the CVAT frontend/backend. I have followed the documentation for deploying CVAT on Kubernetes using Helm. Here are some logs from configuring the cluster.
helm upgrade -n cvat test -i --create-namespace ./helm-chart -f ./helm-chart/values.yaml -f ./helm-chart/values.override.yaml
Release "test" does not exist. Installing it now.
walk.go:74: found symbolic link in path: /.../helm-chart/analytics resolves to /.../components/analytics. Contents of linked file included and used
NAME: test
LAST DEPLOYED: Tue Jul 2 13:50:33 2024
NAMESPACE: cvat
STATUS: deployed
REVISION: 1
I expect cvat.local to respond when I send requests via ping or curl.
Possible Solution
No response
Context
This is the values.override.yaml file I used for context:
traefik:
  enabled: true
  service:
    externalIPs:
      - "192.168.49.2" # add minikube ip when testing locally.
ingress:
  enabled: true
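Because the chart defaults `ingress.className` to `""`, it may also be necessary to point the Ingress at Traefik explicitly. A sketch of a fuller override, assuming the IngressClass installed by the bundled Traefik chart is named `traefik` (that name is an assumption; verify with `kubectl get ingressclass`):

```yaml
# Sketch of values.override.yaml -- "traefik" as className is an assumption;
# check the actual name with: kubectl get ingressclass
traefik:
  enabled: true
  service:
    externalIPs:
      - "192.168.49.2"  # minikube ip
ingress:
  enabled: true
  hostname: cvat.local
  className: "traefik"
```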
The values.yaml file looks like this:
# Default values for cvat.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
cvat:
  backend:
    labels: {}
    annotations: {}
    resources: {}
    affinity: {}
    tolerations: []
    additionalEnv: []
    additionalVolumes: []
    additionalVolumeMounts: []
    # -- The service account the backend pods will use to interact with the Kubernetes API
    serviceAccount:
      name: default
    initializer:
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    server:
      replicas: 1
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      envs:
        ALLOWED_HOSTS: "*"
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    worker:
      export:
        replicas: 2
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      import:
        replicas: 2
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      annotation:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      webhooks:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      qualityreports:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      analyticsreports:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
    utils:
      replicas: 1
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    replicas: 1
    image: cvat/server
    tag: dev
    imagePullPolicy: Always
    permissionFix:
      enabled: true
    service:
      annotations:
        traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
      spec:
        type: ClusterIP
        ports:
          - port: 8080
            targetPort: 8080
            protocol: TCP
            name: http
    defaultStorage:
      enabled: true
      # storageClassName: default
      # accessModes:
      #   - ReadWriteMany
      size: 20Gi
    disableDistinctCachePerService: false
  frontend:
    replicas: 1
    image: cvat/ui
    tag: dev
    imagePullPolicy: Always
    labels: {}
    # test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    # nodeAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #     nodeSelectorTerms:
    #       - matchExpressions:
    #           - key: kubernetes.io/e2e-az-name
    #             operator: In
    #             values:
    #               - e2e-az1
    #               - e2e-az2
    additionalEnv: []
    # Example:
    # - name: volume-from-secret
    # - name: TEST
    #   value: "test"
    additionalVolumes: []
    # Example (assumes that pvc was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #     claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # - mountPath: /tmp
    #   name: tmp
    #   subPath: test
    service:
      type: ClusterIP
      ports:
        - port: 80
          targetPort: 80
          protocol: TCP
          name: http
  opa:
    replicas: 1
    image: openpolicyagent/opa
    tag: 0.63.0
    imagePullPolicy: IfNotPresent
    labels: {}
    # test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    # nodeAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #     nodeSelectorTerms:
    #       - matchExpressions:
    #           - key: kubernetes.io/e2e-az-name
    #             operator: In
    #             values:
    #               - e2e-az1
    #               - e2e-az2
    additionalEnv: []
    # Example:
    # - name: volume-from-secret
    # - name: TEST
    #   value: "test"
    additionalVolumes: []
    # Example (assumes that pvc was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #     claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # - mountPath: /tmp
    #   name: tmp
    #   subPath: test
    composeCompatibleServiceName: true # Sets service name to opa in order to be compatible with Docker Compose. Necessary because changing IAM_OPA_DATA_URL via environment variables in current images. Hinders multiple deployment due to duplicate name
    service:
      type: ClusterIP
      ports:
        - port: 8181
          targetPort: 8181
          protocol: TCP
          name: http
  kvrocks:
    enabled: true
    external:
      host: kvrocks-external.localdomain
    existingSecret: "cvat-kvrocks-secret"
    secret:
      create: true
      name: cvat-kvrocks-secret
      password: cvat_kvrocks
    image: apache/kvrocks
    tag: 2.7.0
    imagePullPolicy: IfNotPresent
    labels: {}
    # test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    nodeAffinity: {}
    # requiredDuringSchedulingIgnoredDuringExecution:
    #   nodeSelectorTerms:
    #     - matchExpressions:
    #         - key: kubernetes.io/e2e-az-name
    #           operator: In
    #           values:
    #             - e2e-az1
    #             - e2e-az2
    additionalEnv: []
    # Example:
    # - name: TEST
    #   value: "test"
    additionalVolumes: []
    # Example (assumes that pvc was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #     claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # - mountPath: /tmp
    #   name: tmp
    #   subPath: test
    defaultStorage:
      enabled: true
      # storageClassName: default
      # accessModes:
      #   - ReadWriteOnce
      size: 100Gi
postgresql:
  # See https://github.com/bitnami/charts/blob/master/bitnami/postgresql/ for more info
  enabled: true # false for external db
  external:
    # Ignored if an empty value is set
    host: ""
    # Ignored if an empty value is set
    port: ""
  # If not external following config will be applied by default
  auth:
    existingSecret: "{{ .Release.Name }}-postgres-secret"
    username: cvat
    database: cvat
  service:
    ports:
      postgresql: 5432
  secret:
    create: true
    name: "{{ .Release.Name }}-postgres-secret"
    password: cvat_postgresql
    postgres_password: cvat_postgresql_postgres
    replication_password: cvat_postgresql_replica
# https://artifacthub.io/packages/helm/bitnami/redis
redis:
  enabled: true
  external:
    host: 127.0.0.1
  architecture: standalone
  auth:
    existingSecret: "cvat-redis-secret"
    existingSecretPasswordKey: password
  secret:
    create: true
    name: cvat-redis-secret
    password: cvat_redis
  # TODO: persistence options
nuclio:
  enabled: false
  # See https://github.com/nuclio/nuclio/blob/master/hack/k8s/helm/nuclio/values.yaml for more info
  # registry:
  #   loginUrl: someurl
  #   credentials:
  #     username: someuser
  #     password: somepass
analytics:
  # Set clickhouse.enabled to false if you disable analytics or use an external database
  enabled: true
  clickhouseDb: cvat
  clickhouseUser: user
  clickhousePassword: user
  clickhouseHost: "{{ .Release.Name }}-clickhouse"
  clickhousePort: 8123
vector:
  envFrom:
    - secretRef:
        name: cvat-analytics-secret
  existingConfigMaps:
    - cvat-vector-config
  dataDir: "/vector-data-dir"
  containerPorts:
    - name: http
      containerPort: 80
      protocol: TCP
  service:
    ports:
      - name: http
        port: 80
        protocol: TCP
  image:
    tag: "0.26.0-alpine"
clickhouse:
  # Set to false in case of external db usage
  enabled: true
  shards: 1
  replicaCount: 1
  extraEnvVarsSecret: cvat-analytics-secret
  initdbScriptsSecret: cvat-clickhouse-init
  auth:
    username: user
    existingSecret: cvat-analytics-secret
    existingSecretKey: CLICKHOUSE_PASSWORD
  # Consider enabling zookeeper if a distributed configuration is used
  zookeeper:
    enabled: false
grafana:
  envFromSecret: cvat-analytics-secret
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        - name: 'ClickHouse'
          type: 'grafana-clickhouse-datasource'
          isDefault: true
          jsonData:
            defaultDatabase: ${CLICKHOUSE_DB}
            port: ${CLICKHOUSE_PORT}
            server: ${CLICKHOUSE_HOST}
            username: ${CLICKHOUSE_USER}
            tlsSkipVerify: false
            protocol: http
          secureJsonData:
            password: ${CLICKHOUSE_PASSWORD}
          editable: false
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  dashboardsConfigMaps:
    default: "cvat-grafana-dashboards"
  plugins:
    - grafana-clickhouse-datasource
  grafana.ini:
    server:
      root_url: https://cvat.local/analytics
    dashboards:
      default_home_dashboard_path: /var/lib/grafana/dashboards/default/all_events.json
    users:
      viewers_can_edit: true
    auth:
      disable_login_form: true
      disable_signout_menu: true
    auth.anonymous:
      enabled: true
      org_role: Admin
    auth.basic:
      enabled: false
ingress:
  ## @param ingress.enabled Enable ingress resource generation for CVAT
  ##
  enabled: false
  ## @param ingress.hostname Host for the ingress resource
  ##
  hostname: cvat.local
  ## @param ingress.annotations Additional annotations for the Ingress resource.
  ##
  ## e.g:
  ## annotations:
  ##   kubernetes.io/ingress.class: nginx
  ##
  annotations: {}
  ## @param ingress.className IngressClass that will be be used to implement the Ingress (Kubernetes 1.18+)
  ## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster
  ## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
  ##
  className: ""
  ## @param ingress.tls Enable TLS configuration for the host defined at `ingress.hostname` parameter
  ## TLS certificates will be retrieved from a TLS secret defined in tlsSecretName parameter
  ##
  tls: false
  ## @param ingress.tlsSecretName Specifies the name of the secret containing TLS certificates. Ignored if ingress.tls is false
  ##
  tlsSecretName: ingress-tls-cvat
traefik:
  enabled: false
  logs:
    general:
      format: json
    access:
      enabled: true
      format: json
      fields:
        general:
          defaultmode: drop
          names:
            ClientHost: keep
            DownstreamContentSize: keep
            DownstreamStatus: keep
            Duration: keep
            RequestHost: keep
            RequestMethod: keep
            RequestPath: keep
            RequestPort: keep
            RequestProtocol: keep
            RouterName: keep
            StartUTC: keep
  providers:
    kubernetesIngress:
      allowEmptyServices: true
smokescreen:
  opts: ''
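Since `ingress.enabled` is false in these defaults and is only switched on in the override, it may be worth confirming that the rendered Ingress object actually exists and that the Traefik service got the external IP. A diagnostic sketch (namespace taken from the helm command above; `--resolve` pins cvat.local to the minikube IP so /etc/hosts is taken out of the equation):

```shell
# Check that the Ingress and the Traefik service were actually rendered
kubectl get ingress -n cvat
kubectl get svc -n cvat -o wide
# Send one HTTP request with cvat.local pinned to the minikube IP
curl -v --resolve cvat.local:80:192.168.49.2 http://cvat.local/
```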
I have ensured that the external IP for the k8s cluster and the DNS entry are properly configured. Here are the logs:
minikube ip
192.168.49.2
kubectl config current-context
minikube
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 mylaptop
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.49.2 cvat.local
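Note that the hosts entry above only maps the name to 192.168.49.2; it does not make that address routable. With minikube's Docker driver (the default under WSL2), the minikube network is frequently unreachable from the host, so ping and curl can both fail even though name resolution is correct. A port-forward bypasses routing entirely for a quick check (the service name `traefik` is an assumption about what the chart deploys; confirm with `kubectl get svc -n cvat`):

```shell
# Forward local port 8080 to the in-cluster Traefik service (name assumed)
kubectl -n cvat port-forward service/traefik 8080:80 &
# Then send a request with the Host header the ingress rule expects
curl -H "Host: cvat.local" http://127.0.0.1:8080/
```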
Steps to Reproduce
ping cvat.local
PING cvat.local (192.168.49.2) 56(84) bytes of data.
^C
--- cvat.local ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3094ms
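ping exercises ICMP rather than HTTP, and Traefik only answers TCP; some network paths drop ICMP while still forwarding TCP, so a failed ping is not conclusive on its own. A TCP-level check against port 80 is more telling, e.g.:

```shell
# -v shows whether the TCP connection itself is established
curl -v http://cvat.local:80/
```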