Overall summary: Redesigns the GitHub Actions workflow deployment strategy around a matrix, and adds the complete Argo CD application manifests and Helm values files for the Kubernetes dev environment, along with Terraform configuration defining the GCP infrastructure stack (VPC, compute resources, load balancer, storage, IAP).
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
k8s-argocd/root-apps/root-prod.yaml (1)
31-33: ⚠️ Potential issue | 🟡 Minor: The comment contradicts the setting.
The comment on line 32, "재귀적으로 하위 디렉토리도 포함" ("recursively include subdirectories as well"), conflicts with recurse: false. The current structure is a multi-level app-of-apps pattern:
- root-prod.yaml (recurse=false): loads only the top-level app.yaml, monitoring.yaml, and platform.yaml
- Each of those files is an independent Application that manages its own subdirectories
The actual behavior is correct, so fix the comment.

```diff
 directory:
-  # 재귀적으로 하위 디렉토리도 포함
+  # 최상위 디렉토리의 Application 매니페스트만 포함
   recurse: false
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k8s-argocd/root-apps/root-prod.yaml` around lines 31 - 33, the comment contradicts the setting: in the directory block the property recurse is set to false but the comment says "재귀적으로 하위 디렉토리도 포함". Update the comment to match the actual setting (recurse: false) — e.g., change the comment to indicate only top-level apps are included or note that each top-level Application manages its own subdirectories; this touches the directory -> recurse setting in root-prod.yaml.
🧹 Nitpick comments (10)
k8s-argocd/applications/dev/app.yaml (1)
20-26: app-root-dev should also pin its revision and use a dedicated project. Tracking `main` (line 25) and using the `default` project (line 20) carry the same change-control and least-privilege risks. Please unify the policy across the root apps.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k8s-argocd/applications/dev/app.yaml` around lines 20 - 26, The app-root-dev manifest uses an unfixed revision and the default project; update the app definition for app-root-dev so it matches the root-app policy: change the project (the project: field) from "default" to the dedicated root application project used for controllers (i.e., the same project name used by other root apps) and pin source.targetRevision from "main" to an immutable reference (a release tag or specific commit SHA) so the app uses a fixed revision; verify the change in the block containing source.repoURL and path: k8s-argocd/applications/dev/app.

k8s-argocd/applications/dev/platform.yaml (1)
20-26: Split the `targetRevision: main` + `project: default` combination into a pinned version and a dedicated project. The `default` project (line 20) and tracking the `main` branch (line 25) are fragile in terms of reproducibility and RBAC. Switching to a fixed git tag or commit SHA and a dedicated AppProject (e.g. `dev-platform`) prevents unintended deployments and narrows the permission scope.

Example change:

```diff
 spec:
-  project: default
+  project: dev-platform
   source:
     repoURL: https://github.com/PinHouse/PinHouse_CLOUD
-    targetRevision: main
+    targetRevision: <git-tag-or-commit-sha>
     path: k8s-argocd/applications/dev/platform
```

Caution: this pattern is used widely across the whole k8s-argocd tree, so establish a consistent policy first and roll the change out incrementally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k8s-argocd/applications/dev/platform.yaml` around lines 20 - 26, the current manifest uses project: default and source.targetRevision: main which is brittle; create a dedicated ArgoCD AppProject (e.g., dev-platform) and update the Application to reference project: dev-platform, and replace targetRevision: main with a fixed git tag or commit SHA (or annotated release ref) to ensure immutability; also ensure the new AppProject (dev-platform) is defined with tightened RBAC/allowedRepositories/namespace/resourceRestrictions to scope permissions for the Application and update any CI/CD docs that deploy this Application to use the new tag/sha and project name.

k8s-argocd/root-apps/root-dev.yaml (1)
22-24: Root apps should be managed at a pinned revision (tag or commit SHA). Both root-dev.yaml and root-prod.yaml currently use `targetRevision: main`. In the app-of-apps pattern, when the root app tracks a moving branch, changes to Application definitions take effect immediately, which makes point-in-time state reproduction and rollback tracing difficult. Even for a dev environment, a consistent revision-management strategy for root apps is recommended.

Example change:

```diff
-  targetRevision: main
+  targetRevision: <pinned-tag-or-commit-SHA>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k8s-argocd/root-apps/root-dev.yaml` around lines 22 - 24, The root ArgoCD applications (root-dev.yaml and root-prod.yaml) currently use a moving branch via targetRevision: main; change targetRevision to a fixed immutable ref (a release tag or specific commit SHA from the repoURL https://github.com/PinHouse/PinHouse_CLOUD) so the App-of-Apps root is reproducible and rollbacks are possible; update both root-dev.yaml and root-prod.yaml to reference the chosen tag or SHA and ensure your deployment/CI pipeline updates that field when promoting changes (or document the promotion step).

k8s-argocd/applications/dev/platform/external-secret.yaml (1)
35: `targetRevision: main` tracks a branch instead of a specific commit/tag, which hurts GitOps reproducibility and audit traceability. Pinning to a commit SHA or release tag improves deployment reproducibility, rollback, and auditing. This pattern repeats in 37 files across the k8s-argocd/applications directory, so a bulk fix is recommended.

Suggested change:

```diff
-    targetRevision: main
+    targetRevision: <commit-sha-or-release-tag>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k8s-argocd/applications/dev/platform/external-secret.yaml` at line 35, The manifest currently sets targetRevision: main which pins ArgoCD to a moving branch; change targetRevision in external-secret.yaml to a specific immutable identifier (a commit SHA or a release tag) instead of "main" to ensure reproducible, auditable deployments; apply the same replacement for all other occurrences in the k8s-argocd/applications directory (the ~37 files) and ensure your CI/CD process or deployment script updates these targetRevision values to the desired SHA/tag when promoting releases.

terraform/environments/dev/variables.tf (1)
315-325: Add range validation (30000-32767) to the NodePort variables. Invalid values are not currently rejected at plan time; failing early on bad input improves operational stability.

Example validation blocks:

```diff
 variable "nginx_gateway_http_node_port" {
   description = "NGINX Gateway Fabric HTTP NodePort 포트입니다."
   type        = number
   default     = 30080
+  validation {
+    condition     = var.nginx_gateway_http_node_port >= 30000 && var.nginx_gateway_http_node_port <= 32767
+    error_message = "nginx_gateway_http_node_port는 30000~32767 범위여야 합니다."
+  }
 }

 variable "nginx_gateway_https_node_port" {
   description = "NGINX Gateway Fabric HTTPS NodePort 포트입니다."
   type        = number
   default     = 30443
+  validation {
+    condition     = var.nginx_gateway_https_node_port >= 30000 && var.nginx_gateway_https_node_port <= 32767
+    error_message = "nginx_gateway_https_node_port는 30000~32767 범위여야 합니다."
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/environments/dev/variables.tf` around lines 315 - 325, Add a validation block to both variable "nginx_gateway_http_node_port" and "nginx_gateway_https_node_port" to enforce the NodePort range 30000..32767 so invalid values fail during plan; implement Terraform variable validation (e.g., validate that each value is >= 30000 and <= 32767) and provide a clear error message like "nginx Gateway NodePort must be between 30000 and 32767" to be shown when validation fails.

terraform/environments/dev/scripts/k8s-worker-init.sh (1)
59-67: Centralize the Kubernetes repository version in a single variable to reduce version-drift risk. `v1.35` is hardcoded in two places in every k8s bootstrap script (dev/prod, master/worker). Keeping four files in sync by hand invites inconsistency during upgrades; defining the version as a variable at the top of each script improves maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/environments/dev/scripts/k8s-worker-init.sh` around lines 59 - 67, The script hardcodes the Kubernetes version string "v1.35" in the curl and echo lines (the Release.key download and the sources.list deb entry); define a single variable (e.g., K8S_VERSION) at the top of the script and replace the hardcoded "v1.35" occurrences in the curl command and the echo deb URL with that variable, and do the same refactor across the other bootstrap scripts (dev/prod, master/worker) so all scripts source the same version variable to avoid drift.

terraform/environments/dev/outputs.tf (1)
95-103: Duplicate output definitions. `k8s_worker_instance_group_id` and `instance_group_id` both output the same value (`module.k8s_worker_nodes.instance_group_id`). If this is not an intentional alias, remove the duplicate; if it is, state that in the description.

♻️ Suggested removal of the duplicate output:

```diff
 output "k8s_worker_instance_group_id" {
   description = "생성된 Kubernetes 워커 인스턴스 그룹 ID입니다."
   value       = module.k8s_worker_nodes.instance_group_id
 }
-
-output "instance_group_id" {
-  description = "생성된 Kubernetes 워커 인스턴스 그룹 ID입니다."
-  value       = module.k8s_worker_nodes.instance_group_id
-}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/environments/dev/outputs.tf` around lines 95 - 103, Two outputs, k8s_worker_instance_group_id and instance_group_id, both emit the same value module.k8s_worker_nodes.instance_group_id; remove the duplicate or make it explicit as an alias. Either delete one of the output blocks (prefer keeping the more descriptive k8s_worker_instance_group_id) or, if you need both, update the description of instance_group_id to state it is an intentional alias of k8s_worker_instance_group_id and why it exists so reviewers understand the duplication.

terraform/environments/dev/scripts/k8s-master-init.sh (2)
161-164: Prefer `kubectl apply` over `kubectl create` in the Calico install steps. `kubectl create` fails when a resource already exists; `kubectl apply` keeps the script idempotent and safely re-runnable.

♻️ Suggested fix for an idempotent installation:

```diff
-kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/${calico_version}/manifests/operator-crds.yaml
-kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/${calico_version}/manifests/tigera-operator.yaml
-kubectl create -f /root/calico-custom-resources.yaml
+kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/${calico_version}/manifests/operator-crds.yaml
+kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/${calico_version}/manifests/tigera-operator.yaml
+kubectl apply -f /root/calico-custom-resources.yaml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/environments/dev/scripts/k8s-master-init.sh` around lines 161 - 164, The Calico installation uses non-idempotent "kubectl create" for the three manifests (operator-crds.yaml, tigera-operator.yaml, and /root/calico-custom-resources.yaml); change those kubectl create invocations to "kubectl apply -f <manifest>" so the script becomes re-runnable and tolerant of existing resources (replace the three lines that reference operator-crds.yaml, tigera-operator.yaml, and /root/calico-custom-resources.yaml accordingly).
81-88: Fragile approach: `sed` used for JSON parsing instead of `jq`. Parsing JSON with `sed` can fail on edge cases (e.g. special characters inside the token, multi-line responses). The script does not install `jq` earlier, so either add the dependency or accept the limits of the current approach.

♻️ Suggested jq-based parsing:

Add `jq` to the install step on line 19:

```diff
-apt-get install -y apt-transport-https ca-certificates curl gpg containerd
+apt-get install -y apt-transport-https ca-certificates curl gpg containerd jq
```

Use `jq` on line 83:

```diff
-access_token="$(printf '%s' "$${token_response}" | sed -n 's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')"
+access_token="$(printf '%s' "$${token_response}" | jq -r '.access_token')"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/environments/dev/scripts/k8s-master-init.sh` around lines 81 - 88, The script uses sed to parse JSON from token_response into access_token, which is brittle; install jq at the top of the script (ensure package installation/apt-get or add to bootstrap steps) and replace the sed extraction with a jq-based extraction reading token_response and assigning access_token (use jq -r '.access_token' on the captured JSON). Update the token_response/access_token handling in k8s-master-init.sh so it fails cleanly if jq is missing or the .access_token field is empty, preserving the existing error message behavior.

terraform/environments/dev/compute.tf (1)
42-44: `cloud-platform` OAuth scope: review against least privilege. The `https://www.googleapis.com/auth/cloud-platform` scope grants full access to every GCP API. That is acceptable in a dev environment, but in production it is security best practice to grant only the scopes actually needed (e.g. `storage-ro`, `logging-write`, `monitoring`).

Also applies to: 97-99
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@terraform/environments/dev/compute.tf` around lines 42 - 44, The service_account_scopes currently grants full GCP access via "https://www.googleapis.com/auth/cloud-platform"; update the service_account_scopes array in compute.tf (symbol: service_account_scopes) to use least-privilege scopes for production (e.g., storage-ro, logging-write, monitoring) and reserve cloud-platform only for dev/test, and apply the same change to the other occurrences noted (the repeated service_account_scopes block referenced in the comment). Ensure each environment's compute.tf defines an explicit, minimal set of OAuth scopes corresponding to the services the VM needs rather than the global cloud-platform scope.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/terraform-apply.yml:
- Around line 7-14: The workflow removed the staging target option while the
repo no longer has terraform/environments/staging; confirm whether staging
should be supported and if so restore the staging environment folder and all
workflow logic: add "staging" back to the workflow input choices under the
"target" input (options list), and reintroduce any staging-specific
branches/steps that reference terraform/environments/staging; if staging is
intentionally removed, reply confirming deletion and ensure no remaining code
references "staging" in the workflow or terraform paths so the workflow input
options and any conditional logic (the "target" input and related steps) are
consistent with only "dev" and "prod".
- Around line 150-168: Multiple echo commands append separately to
$GITHUB_STEP_SUMMARY; group them into a single redirected block and quote
variables/paths to avoid word-splitting and globbing. Replace the series of echo
and cat >> $GITHUB_STEP_SUMMARY with a grouped block using { ... } >>
"$GITHUB_STEP_SUMMARY", and quote "$GITHUB_STEP_SUMMARY" and the path
"terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt";
keep the inner logic (the if check for the apply_output.txt file, the
details/summary wrapper, and the code fence) unchanged but remove per-line
redirections.
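The grouped-redirection suggestion above can be sketched as a small shell snippet. The heading text and the `apply_output.txt` path below are illustrative stand-ins, not the workflow's exact values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in values; in the workflow these come from the runner environment
# and the matrix (matrix.terraform_environment).
GITHUB_STEP_SUMMARY="${GITHUB_STEP_SUMMARY:-step_summary.md}"
apply_output="terraform/environments/dev/apply_output.txt"

# Three backticks for the markdown code fence, built without literal backticks
fence=$'\x60\x60\x60'

# One grouped block with a single quoted redirection, instead of
# appending ">> $GITHUB_STEP_SUMMARY" on every line.
{
  echo "## Terraform Apply Result"
  if [ -f "${apply_output}" ]; then
    echo "<details><summary>Apply output</summary>"
    echo
    echo "${fence}"
    cat "${apply_output}"
    echo "${fence}"
    echo "</details>"
  else
    echo "_No apply output found._"
  fi
} >> "${GITHUB_STEP_SUMMARY}"
```

Grouping also means the summary file is opened once, and quoting `"$GITHUB_STEP_SUMMARY"` avoids word-splitting if the path ever contains spaces.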
In `@k8s-argocd/applications/dev/app/backend.yaml`:
- Around line 25-28: Annotations notifications.argoproj.io/subscribe.* currently
reference the stale suffix "backend-nonprod"; update each subscription key
(notifications.argoproj.io/subscribe.on-sync-running.backend-nonprod,
notifications.argoproj.io/subscribe.on-deployed.backend-nonprod,
notifications.argoproj.io/subscribe.on-sync-failed.backend-nonprod,
notifications.argoproj.io/subscribe.on-health-degraded.backend-nonprod) to use
"backend-dev" so the annotation keys match the dev naming convention in this PR.
- Around line 18-21: The argocd-image-updater annotations use an alias
("backend") but argocd-image-updater.argoproj.io/image-list currently lacks the
alias mapping; update the image-list annotation to include the alias mapping for
backend (e.g., add
"backend=asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-be/pinhouse-server")
so the backend.update-strategy, backend.allow-tags and
backend.kustomize.image-name annotations apply correctly to that image alias.
In `@k8s-argocd/applications/dev/app/frontend.yaml`:
- Around line 25-27: The notification annotation keys use the inconsistent
suffix "frontend-nonprod"
(notifications.argoproj.io/subscribe.on-deployed.frontend-nonprod,
notifications.argoproj.io/subscribe.on-sync-failed.frontend-nonprod,
notifications.argoproj.io/subscribe.on-health-degraded.frontend-nonprod); update
those keys to use "frontend-dev" to match the dev environment naming (i.e.,
replace the ".frontend-nonprod" suffix with ".frontend-dev") so the notification
hooks align with the rest of the dev environment.
- Around line 18-21: The image-list is missing the image alias, so the per-image settings (the keys frontend.update-strategy, frontend.allow-tags, frontend.kustomize.image-name) are not applied; update the argocd-image-updater.argoproj.io/image-list value to the form
"frontend=asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-fe/pinhouse-web"
so the alias frontend is explicit and matches the identifiers in use (frontend.update-strategy, frontend.allow-tags, frontend.kustomize.image-name).
In `@k8s-argocd/applications/dev/app/image-updater.yaml`:
- Around line 5-7: The manifest uses the ImageUpdater CRD (kind: ImageUpdater,
apiVersion: argocd-image-updater.argoproj.io/v1alpha1) but the repo lacks the
CRD or installation guidance; add documentation and one of the following fixes:
include the CRD in the Helm chart values/templates so it’s installed with
deployments, add a standalone CRD/installation manifest under deploy/ or
k8s/crds with clear apply instructions, or add an INSTALL.md entry that links to
the official Argo CD Image Updater CRD install docs and explains which option to
use; reference the ImageUpdater manifest when describing the required CRD.
In `@k8s-argocd/applications/dev/platform/external-secret.yaml`:
- Around line 49-51: The manifest currently sets mutually conflicting apply
options in external-secret.yaml by including both ServerSideApply=true and
Replace=true; remove the Replace=true entry and keep ServerSideApply=true (and
CreateNamespace=true) so Argo CD uses server-side apply only—locate the list
containing CreateNamespace, ServerSideApply, and Replace and delete the
Replace=true line to eliminate Replace behavior and avoid full resource
replacement/down-time.
In `@k8s-argocd/applications/dev/platform/storageclass.yaml`:
- Around line 16-17: The annotation argocd.argoproj.io/sync-wave: "0" does not
enforce StorageClass to be applied before PVC-using workloads; update the
annotation value on the StorageClass resource (argocd.argoproj.io/sync-wave) to
a lower wave (e.g., "-1") so it runs earlier than PVCs, and ensure any PVC or
dependent resources use a higher wave than "-1" to preserve ordering.
In `@k8s-helm/releases/argocd/values-dev.yaml`:
- Around line 11-18: Add explicit NetworkPolicy resources for the argocd
namespace: create a default-deny-all NetworkPolicy for the argocd namespace and
a second NetworkPolicy that limits ingress to the argocd-server pods (select by
the argocd-server pod label) only from the nginx-gateway namespace (or from the
specific gateway pod label), while still allowing required traffic (ArgoCD
server port and cluster-internal health/probe endpoints); update the
network-policies template to include these two policies so argocd is covered by
the same namespace loop that generates default-deny-all and per-pod allow rules
for other app namespaces.
In `@k8s-helm/releases/metrics-server/values-dev.yaml`:
- Around line 7-13: The --kubelet-insecure-tls flag is present in the
defaultArgs list and must be removed from all environment values files (e.g.,
values-dev.yaml, values-nonprod.yaml, values-prod.yaml); edit the defaultArgs
arrays to delete the --kubelet-insecure-tls entry, ensure any Helm chart
templates (the defaultArgs injection) do not reintroduce this flag, and document
that Kubelet serving certificates/trust chain must be properly configured before
deploying without this flag.
In `@k8s-helm/releases/monitoring-core/values-dev-gitops.yaml`:
- Line 21: The value searchNamespace: ALL grants unnecessary cluster-wide
access; change searchNamespace from ALL to monitoring to limit the Grafana
sidecar's ConfigMap search to the monitoring namespace (where the dashboard
ConfigMap actually lives), and then update the Grafana sidecar RBAC
configuration by replacing the ClusterRole/ClusterRoleBinding allowance for
ConfigMap reads with a namespaced Role and RoleBinding scoped to the monitoring
namespace so the sidecar uses Role (not ClusterRole) for ConfigMap
get/list/watch on the monitoring namespace.
In `@k8s-kustomize/overlays/dev/backend/deployment.yaml`:
- Around line 17-19: The backend container "pinhouse-be" is missing a
securityContext; add an explicit securityContext block under the container spec
for pinhouse-be (and mirror to other overlays) to enforce safe defaults—e.g.,
runAsNonRoot: true, runAsUser: 1000, allowPrivilegeEscalation: false,
readOnlyRootFilesystem: true and required FS/group settings as appropriate; also
add a podSecurityContext at the pod spec level (if not already present) for
supplemental defaults like runAsUser/runAsGroup and seccompProfile/runtimeClass
where your cluster requires them. Ensure the securityContext is added to the
container entry for the "pinhouse-be" container in the Deployment manifest and
replicate the same change across base/dev/prod overlays.
In `@k8s-kustomize/overlays/dev/frontend/deployment.yaml`:
- Around line 17-19: Add a container-level securityContext for the container
named "pinhouse-fe" in the overlay so the pod cannot run as root or escalate
privileges; set runAsNonRoot: true, runAsUser to a non-root UID (e.g., 1000),
set allowPrivilegeEscalation: false, add securityContext.readOnlyRootFilesystem:
true, drop all Linux capabilities (capabilities.drop: ["ALL"]), and enable
seccompProfile (type: RuntimeDefault); update the container spec under the
"pinhouse-fe" entry so these fields are present in the overlay deployment to
enforce minimum security defaults even when base is missing them.
In `@terraform/environments/dev/scripts/k8s-worker-init.sh`:
- Around line 81-84: The current sed-based extraction of access_token from
token_response is fragile; change the access token parsing to use a proper JSON
parser instead of sed: replace the access_token assignment that uses sed with a
jq-based extraction (e.g., parse token_response with jq -r '.access_token') or a
small python3 one-liner that loads JSON and prints ["access_token"]; ensure the
new command reads from the token_response variable (or pipes curl directly) and
fails fast if the key is missing so kubelet image pull auth won’t get an empty
token.
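A minimal sketch of the jq-based, fail-fast extraction described above; the sample `token_response` JSON and the error messages are hypothetical stand-ins, not the script's real values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fail early if jq is not installed (the init scripts would add it via apt-get).
command -v jq >/dev/null 2>&1 || { echo "jq is required but not installed" >&2; exit 1; }

# Hypothetical metadata-server response used as a stand-in here.
token_response='{"access_token":"ya29.example-token","expires_in":3599,"token_type":"Bearer"}'

# jq -r prints the raw string; '// empty' maps a null/missing key to empty output.
access_token="$(printf '%s' "${token_response}" | jq -r '.access_token // empty')"

# Fail fast instead of continuing with an empty token.
if [ -z "${access_token}" ]; then
  echo "failed to parse access_token from token response" >&2
  exit 1
fi

echo "parsed access_token of length ${#access_token}"
```

Unlike the sed pattern, this handles escaped quotes and multi-line JSON, and the empty-check guarantees kubelet image-pull auth never receives an empty token silently.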
In `@terraform/environments/dev/storage.tf`:
- Around line 19-23: The storage module currently uses "${var.project}-${var.environment}" directly as the GCS bucket name (e.g. in static_assets.name), which risks global name collisions; change the bucket-name logic (static_assets.name and the other bucket name fields) to always add a unique identifier, e.g. combine var.project_id or data.google_project.<...>.project_id as a prefix or suffix (or a fixed suffix like "-bucket") so the name is globally unique, as in "${var.project}-${var.environment}-${var.project_id}", and normalize any characters the naming rules disallow (uppercase, underscores, etc.) to lowercase/hyphens.
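The naming rule suggested above can be sketched as follows; the project/environment values are hypothetical examples, not the repo's actual variable values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical inputs standing in for var.project, var.environment, var.project_id.
project="PinHouse"
environment="dev"
project_id="dev-pinhouse"

# Append the project ID so the bucket name is globally unique.
raw_name="${project}-${environment}-${project_id}"

# GCS bucket names allow lowercase letters, digits, and hyphens (dots and
# underscores only with restrictions); normalize uppercase and underscores.
bucket_name="$(printf '%s' "${raw_name}" | tr '[:upper:]' '[:lower:]' | tr '_' '-')"

echo "${bucket_name}"   # pinhouse-dev-dev-pinhouse
```

In Terraform the same normalization would live in a `lower(replace(...))` expression or a `locals` block; the shell version just makes the transformation concrete.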
In `@terraform/environments/dev/terraform.tfvars.example`:
- Around line 4-5: Update the terraform example variables so they don't point to
production: replace the prod placeholder value for project_id with a dev-safe
placeholder (e.g., "your-dev-gcp-project-id") and adjust the repository
description/placeholder for project (the `project` and `project_id` entries) to
clearly indicate this file is for the dev environment; ensure the same change is
applied to the other occurrences referenced (the other example blocks around the
30-45 range) so no example terraform.tfvars still references production
identifiers.
- Line 95: The line "ESO 또는 특정 서비스 계정에 접근 권한을 줄 때만 아래 값을 채웁니다." is not a valid
HCL comment and breaks terraform.tfvars.example parsing; fix it by prefixing
that exact line with a hash character (#) so it becomes an HCL comment (i.e.,
add "#" at the start of the line containing that Korean sentence) to restore
valid HCL2 syntax.
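A throwaway sketch of that one-character fix, reproducing the broken line in a scratch file rather than touching the real terraform.tfvars.example:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Reproduce the bare (non-comment) Korean sentence in a scratch copy.
scratch="$(mktemp)"
printf '%s\n' 'ESO 또는 특정 서비스 계정에 접근 권한을 줄 때만 아래 값을 채웁니다.' > "${scratch}"

# Prefix the line with '# ' so it becomes a valid HCL comment.
sed 's/^ESO /# ESO /' "${scratch}" > "${scratch}.fixed"

# The first line of the fixed file now starts with '# '.
head -n 1 "${scratch}.fixed"
```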
In `@terraform/environments/dev/versions.tf`:
- Line 5: The Terraform minimum required_version is too low for the optional()
type used in the artifact_registry_repositories variable; update the
required_version declaration in versions.tf from ">= 1.0" to at least ">= 1.3.0"
so Terraform supports optional(string), optional(bool), and
optional(map(string)) used in the variable definition (search for
artifact_registry_repositories and optional(...) usages to locate the impacted
code).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f93eebfe-2b10-4a32-a3f2-e96d39727b54
📒 Files selected for processing (65)
.github/workflows/terraform-apply.yml
.github/workflows/terraform-plan.yml
k8s-argocd/applications/dev/app.yaml
k8s-argocd/applications/dev/app/backend.yaml
k8s-argocd/applications/dev/app/frontend.yaml
k8s-argocd/applications/dev/app/image-updater.yaml
k8s-argocd/applications/dev/monitoring.yaml
k8s-argocd/applications/dev/monitoring/monitoring-alloy.yaml
k8s-argocd/applications/dev/monitoring/monitoring-core.yaml
k8s-argocd/applications/dev/monitoring/monitoring-loki.yaml
k8s-argocd/applications/dev/monitoring/monitoring-tempo.yaml
k8s-argocd/applications/dev/platform.yaml
k8s-argocd/applications/dev/platform/argocd-config.yaml
k8s-argocd/applications/dev/platform/cert-manager.yaml
k8s-argocd/applications/dev/platform/external-secret.yaml
k8s-argocd/applications/dev/platform/gateway-api.yaml
k8s-argocd/applications/dev/platform/gce-pd-csi-driver.yaml
k8s-argocd/applications/dev/platform/metrics-server.yaml
k8s-argocd/applications/dev/platform/monitoring-httproute.yaml
k8s-argocd/applications/dev/platform/nginx-gateway-fabric.yaml
k8s-argocd/applications/dev/platform/platform-resources.yaml
k8s-argocd/applications/dev/platform/storageclass.yaml
k8s-argocd/root-apps/root-dev.yaml
k8s-argocd/root-apps/root-prod.yaml
k8s-helm/.gitignore
k8s-helm/platform-chart/values-dev.yaml
k8s-helm/releases/argocd/values-dev.yaml
k8s-helm/releases/calico/values-dev.yaml
k8s-helm/releases/cert-manager/values-dev.yaml
k8s-helm/releases/external-secret/values-dev.yaml
k8s-helm/releases/metrics-server/values-dev.yaml
k8s-helm/releases/monitoring-alloy/values-dev-gitops.yaml
k8s-helm/releases/monitoring-core/values-dev-gitops.yaml
k8s-helm/releases/monitoring-loki/values-dev-gitops.yaml
k8s-helm/releases/monitoring-tempo/values-dev-gitops.yaml
k8s-helm/releases/nginx-gateway-fabric/values-dev.yaml
k8s-kustomize/overlays/dev/backend/deployment.yaml
k8s-kustomize/overlays/dev/backend/httproute.yaml
k8s-kustomize/overlays/dev/backend/kustomization.yaml
k8s-kustomize/overlays/dev/frontend/deployment.yaml
k8s-kustomize/overlays/dev/frontend/httproute.yaml
k8s-kustomize/overlays/dev/frontend/kustomization.yaml
k8s-kustomize/overlays/dev/kustomization.yaml
k8s-kustomize/platform/argocd/overlays/dev/httproute.yaml
k8s-kustomize/platform/argocd/overlays/dev/kustomization.yaml
k8s-kustomize/platform/argocd/overlays/dev/notifications-cm-context-patch.yaml
k8s-kustomize/platform/monitoring/overlays/dev/httproute.yaml
k8s-kustomize/platform/monitoring/overlays/dev/kustomization.yaml
terraform/environments/dev/artifact-registry.tf
terraform/environments/dev/backend.tf
terraform/environments/dev/compute.tf
terraform/environments/dev/firewall.tf
terraform/environments/dev/iap.tf
terraform/environments/dev/load-balancer.tf
terraform/environments/dev/outputs.tf
terraform/environments/dev/private-google-access.tf
terraform/environments/dev/provider.tf
terraform/environments/dev/scripts/k8s-master-init.sh
terraform/environments/dev/scripts/k8s-worker-init.sh
terraform/environments/dev/secret-manager.tf
terraform/environments/dev/storage.tf
terraform/environments/dev/terraform.tfvars.example
terraform/environments/dev/variables.tf
terraform/environments/dev/versions.tf
terraform/environments/dev/vpc.tf
```yaml
      target:
        description: "적용할 대상을 선택합니다. all을 선택하면 dev와 prod를 병렬로 적용합니다."
        required: true
        type: choice
        options:
          - dev
          - staging
          - prod
          - all
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "terraform/environments 아래 환경 디렉터리:"
find terraform/environments -mindepth 1 -maxdepth 1 -type d | sort
echo
echo "terraform-apply.yml의 대상 옵션과 환경 매핑:"
sed -n '1,70p' .github/workflows/terraform-apply.yml
echo
echo "staging 관련 참조:"
rg -n -C2 'staging|PinHouse_staging' .github/workflows/terraform-apply.yml
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 2242
The staging environment has been removed entirely. Please confirm this was intended in this PR.

- `terraform/environments/` has no staging directory (only dev and prod exist)
- The workflow input options offer no staging choice
- There are no staging references anywhere in the workflow

If staging should remain a supported environment, both the directory and the workflow logic need to be restored.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/terraform-apply.yml around lines 7 - 14, The workflow
removed the staging target option while the repo no longer has
terraform/environments/staging; confirm whether staging should be supported and
if so restore the staging environment folder and all workflow logic: add
"staging" back to the workflow input choices under the "target" input (options
list), and reintroduce any staging-specific branches/steps that reference
terraform/environments/staging; if staging is intentionally removed, reply
confirming deletion and ensure no remaining code references "staging" in the
workflow or terraform paths so the workflow input options and any conditional
logic (the "target" input and related steps) are consistent with only "dev" and
"prod".
````diff
       - name: 배포 요약 생성
         if: always()
         run: |
-          echo "## Terraform Apply 요약 - ${{ github.event.inputs.environment }}" >> $GITHUB_STEP_SUMMARY
+          echo "## Terraform Apply 요약 - ${{ matrix.terraform_environment }}" >> $GITHUB_STEP_SUMMARY
           echo "" >> $GITHUB_STEP_SUMMARY
-          echo "**환경:** ${{ github.event.inputs.environment }}" >> $GITHUB_STEP_SUMMARY
+          echo "**환경:** ${{ matrix.terraform_environment }}" >> $GITHUB_STEP_SUMMARY
+          echo "**GitHub Environment:** ${{ matrix.github_environment }}" >> $GITHUB_STEP_SUMMARY
           echo "**실행 결과:** ${{ steps.apply.outcome }}" >> $GITHUB_STEP_SUMMARY
           echo "**실행 사용자:** @${{ github.actor }}" >> $GITHUB_STEP_SUMMARY
           echo "**커밋 SHA:** ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
           echo "" >> $GITHUB_STEP_SUMMARY

-          if [ -f terraform/environments/${{ github.event.inputs.environment }}/apply_output.txt ]; then
+          if [ -f terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt ]; then
             echo "<details><summary>Apply 출력 보기</summary>" >> $GITHUB_STEP_SUMMARY
             echo "" >> $GITHUB_STEP_SUMMARY
             echo '```' >> $GITHUB_STEP_SUMMARY
-            cat terraform/environments/${{ github.event.inputs.environment }}/apply_output.txt >> $GITHUB_STEP_SUMMARY
+            cat terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt >> $GITHUB_STEP_SUMMARY
             echo '```' >> $GITHUB_STEP_SUMMARY
             echo "</details>" >> $GITHUB_STEP_SUMMARY
````
🧩 Analysis chain
🏁 Script executed:

```bash
find . -name "terraform-apply.yml" -o -name "*terraform*apply*" | grep -E "\.(yml|yaml)$" | head -20
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 106
🏁 Script executed:

```bash
if [ -f ".github/workflows/terraform-apply.yml" ]; then
  echo "File found"
  wc -l ".github/workflows/terraform-apply.yml"
else
  find . -type f -name "*terraform*apply*"
fi
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 119
🏁 Script executed:

```bash
sed -n '150,168p' ".github/workflows/terraform-apply.yml"
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 1213
🏁 Script executed:

```bash
# Web search needed to verify the shellcheck warning codes
# Check the definitions of SC2086 and SC2129
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 49
🌐 Web query:
shellcheck SC2086 SC2129
💡 Result:
ShellCheck warnings SC2086 and SC2129 are from the official ShellCheck wiki. SC2086: Double quote to prevent globbing and word splitting. It warns when variables or parameters are unquoted in command contexts, like echo
Citations:
- 1: https://www.shellcheck.net/wiki/SC2086
- 2: https://github.com/koalaman/shellcheck/wiki/Sc2086
- 3: https://www.shellcheck.net/wiki/SC2129
- 4: https://github.com/koalaman/shellcheck/wiki/SC2129
Clean up the $GITHUB_STEP_SUMMARY redirections.

This block redirects many individual echo commands to the same file. Grouping them with braces opens the file only once, and quoting the paths and variables makes it safer. Quote the $GITHUB_STEP_SUMMARY variable and the terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt path.

Suggested fix
````diff
- echo "## Terraform Apply 요약 - ${{ matrix.terraform_environment }}" >> $GITHUB_STEP_SUMMARY
- echo "" >> $GITHUB_STEP_SUMMARY
- echo "**환경:** ${{ matrix.terraform_environment }}" >> $GITHUB_STEP_SUMMARY
- echo "**GitHub Environment:** ${{ matrix.github_environment }}" >> $GITHUB_STEP_SUMMARY
- echo "**실행 결과:** ${{ steps.apply.outcome }}" >> $GITHUB_STEP_SUMMARY
- echo "**실행 사용자:** @${{ github.actor }}" >> $GITHUB_STEP_SUMMARY
- echo "**커밋 SHA:** ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
- echo "" >> $GITHUB_STEP_SUMMARY
-
- if [ -f terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt ]; then
-   echo "<details><summary>Apply 출력 보기</summary>" >> $GITHUB_STEP_SUMMARY
-   echo "" >> $GITHUB_STEP_SUMMARY
-   echo '```' >> $GITHUB_STEP_SUMMARY
-   cat terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt >> $GITHUB_STEP_SUMMARY
-   echo '```' >> $GITHUB_STEP_SUMMARY
-   echo "</details>" >> $GITHUB_STEP_SUMMARY
- fi
+ {
+   echo "## Terraform Apply 요약 - ${{ matrix.terraform_environment }}"
+   echo ""
+   echo "**환경:** ${{ matrix.terraform_environment }}"
+   echo "**GitHub Environment:** ${{ matrix.github_environment }}"
+   echo "**실행 결과:** ${{ steps.apply.outcome }}"
+   echo "**실행 사용자:** @${{ github.actor }}"
+   echo "**커밋 SHA:** ${{ github.sha }}"
+   echo ""
+
+   if [ -f "terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt" ]; then
+     echo "<details><summary>Apply 출력 보기</summary>"
+     echo ""
+     echo '```'
+     cat "terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt"
+     echo '```'
+     echo "</details>"
+   fi
+ } >> "$GITHUB_STEP_SUMMARY"
````

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
````yaml
      - name: 배포 요약 생성
        if: always()
        run: |
          {
            echo "## Terraform Apply 요약 - ${{ matrix.terraform_environment }}"
            echo ""
            echo "**환경:** ${{ matrix.terraform_environment }}"
            echo "**GitHub Environment:** ${{ matrix.github_environment }}"
            echo "**실행 결과:** ${{ steps.apply.outcome }}"
            echo "**실행 사용자:** @${{ github.actor }}"
            echo "**커밋 SHA:** ${{ github.sha }}"
            echo ""

            if [ -f "terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt" ]; then
              echo "<details><summary>Apply 출력 보기</summary>"
              echo ""
              echo '```'
              cat "terraform/environments/${{ matrix.terraform_environment }}/apply_output.txt"
              echo '```'
              echo "</details>"
            fi
          } >> "$GITHUB_STEP_SUMMARY"
````
🧰 Tools
🪛 actionlint (1.7.12)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:11:53: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:12:14: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:13:17: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:14:86: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:15:17: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:16:24: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:1:71: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:2:12: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:3:55: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:4:68: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:5:49: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:6:44: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:7:41: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2086:info:8:12: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2129:style:11:3: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
[error] 152-152: shellcheck reported issue in this script: SC2129:style:1:1: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/terraform-apply.yml around lines 150 - 168, Multiple echo
commands append separately to $GITHUB_STEP_SUMMARY; group them into a single
redirected block and quote variables/paths to avoid word-splitting and globbing.
Replace the series of echo and cat >> $GITHUB_STEP_SUMMARY with a grouped block
using { ... } >> "$GITHUB_STEP_SUMMARY", and quote "$GITHUB_STEP_SUMMARY" and
the path "terraform/environments/${{ matrix.terraform_environment
}}/apply_output.txt"; keep the inner logic (the if check for the
apply_output.txt file, the details/summary wrapper, and the code fence)
unchanged but remove per-line redirections.
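The grouped-redirect pattern recommended above can be tried standalone; a minimal sketch, with a temp file standing in for "$GITHUB_STEP_SUMMARY" (names here are illustrative only):

```shell
# Minimal sketch of the grouped-redirect pattern; a temp file stands in
# for "$GITHUB_STEP_SUMMARY".
summary="$(mktemp)"

{
  echo "## Terraform Apply summary"
  echo "**Result:** success"
} >> "$summary"   # the file is opened once for the whole group

cat "$summary"
```

Besides satisfying SC2129, the grouped form means a failure partway through leaves a single coherent write path to audit.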
```yaml
    argocd-image-updater.argoproj.io/image-list: asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-be/pinhouse-server
    argocd-image-updater.argoproj.io/backend.update-strategy: newest-build
    argocd-image-updater.argoproj.io/backend.allow-tags: regexp:^[0-9]{8}_[0-9]{6}-[a-f0-9]{7}$
    argocd-image-updater.argoproj.io/backend.kustomize.image-name: REPLACE_ME
```
🧩 Analysis chain
🌐 Web query:
When using alias-based annotations such as argocd-image-updater.argoproj.io/&lt;alias&gt;.update-strategy in Argo CD Image Updater, must image-list declare the alias in &lt;alias&gt;=&lt;image&gt; form? Please confirm against the official documentation.
💡 Result:
Yes. In Argo CD Image Updater's legacy annotation-based configuration (mainly v0.x), using an alias-based annotation such as argocd-image-updater.argoproj.io/&lt;alias&gt;.update-strategy requires the alias to be declared in the image-list annotation in &lt;alias&gt;=&lt;image&gt; form. Per the official documentation: image-list takes the form [&lt;alias_name&gt;=]&lt;image_path&gt;[:&lt;version_constraint&gt;], so the alias is optional in general, but alias-scoped annotations (e.g. .update-strategy, .allow-tags) only apply when the alias is explicitly assigned in image-list. (Source: Images configuration docs) "Assigning an alias name to an image is necessary in these scenarios: ... If you want to set custom options for a given image's update strategy" (source: https://argocd-image-updater.readthedocs.io/en/registry-scanner-release-0.2/configuration/images/) "It is generally recommended to set aliases for each of your images. Many of the features depend on an alias being set, and aliases might become a strict requirement in the future." (same source) Specifying only an image without an alias (e.g. image-list: some/image) applies the default .update-strategy (the alias-less form); an alias-scoped .update-strategy is only honored when the alias is defined in image-list. In CRD-based configuration from v1.x onward, annotations are replaced by the ImageUpdater CR's images[].alias field.
Citations:
- 1: https://argocd-image-updater.readthedocs.io/en/registry-scanner-release-0.2/configuration/images/
- 2: https://argocd-image-updater.readthedocs.io/en/stable/configuration/migration/
- 3: https://argocd-image-updater.readthedocs.io/en/registry-scanner-release-0.1/configuration/images/
- 4: https://argocd-image-updater.readthedocs.io/en/registry-scanner-release-0.2/basics/update-strategies/
- 5: https://argocd-image-updater.readthedocs.io/en/registry-scanner-release-0.2/examples/
Add the alias mapping to the image-list annotation

Lines 19-21 use alias-based annotations (backend.update-strategy, backend.allow-tags, backend.kustomize.image-name), but the image-list on line 18 has no backend= mapping. Per the official Argo CD Image Updater documentation, alias-based annotations only take effect when the alias is declared explicitly in image-list. As written, these options are ignored and automatic image updates will not work correctly.

Suggested fix

```diff
-    argocd-image-updater.argoproj.io/image-list: asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-be/pinhouse-server
+    argocd-image-updater.argoproj.io/image-list: backend=asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-be/pinhouse-server
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
    argocd-image-updater.argoproj.io/image-list: backend=asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-be/pinhouse-server
    argocd-image-updater.argoproj.io/backend.update-strategy: newest-build
    argocd-image-updater.argoproj.io/backend.allow-tags: regexp:^[0-9]{8}_[0-9]{6}-[a-f0-9]{7}$
    argocd-image-updater.argoproj.io/backend.kustomize.image-name: REPLACE_ME
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@k8s-argocd/applications/dev/app/backend.yaml` around lines 18 - 21, The
argocd-image-updater annotations use an alias ("backend") but
argocd-image-updater.argoproj.io/image-list currently lacks the alias mapping;
update the image-list annotation to include the alias mapping for backend (e.g.,
add
"backend=asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-be/pinhouse-server")
so the backend.update-strategy, backend.allow-tags and
backend.kustomize.image-name annotations apply correctly to that image alias.
```yaml
    notifications.argoproj.io/subscribe.on-sync-running.backend-nonprod: ""
    notifications.argoproj.io/subscribe.on-deployed.backend-nonprod: ""
    notifications.argoproj.io/subscribe.on-sync-failed.backend-nonprod: ""
    notifications.argoproj.io/subscribe.on-health-degraded.backend-nonprod: ""
```
The notification subscription keys still carry the nonprod name

Lines 25-28 use backend-nonprod, which conflicts with this PR's dev naming cleanup. Unless this is an intentional leftover, it is safer to unify on backend-dev.

Suggested fix

```diff
-    notifications.argoproj.io/subscribe.on-sync-running.backend-nonprod: ""
-    notifications.argoproj.io/subscribe.on-deployed.backend-nonprod: ""
-    notifications.argoproj.io/subscribe.on-sync-failed.backend-nonprod: ""
-    notifications.argoproj.io/subscribe.on-health-degraded.backend-nonprod: ""
+    notifications.argoproj.io/subscribe.on-sync-running.backend-dev: ""
+    notifications.argoproj.io/subscribe.on-deployed.backend-dev: ""
+    notifications.argoproj.io/subscribe.on-sync-failed.backend-dev: ""
+    notifications.argoproj.io/subscribe.on-health-degraded.backend-dev: ""
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
    notifications.argoproj.io/subscribe.on-sync-running.backend-dev: ""
    notifications.argoproj.io/subscribe.on-deployed.backend-dev: ""
    notifications.argoproj.io/subscribe.on-sync-failed.backend-dev: ""
    notifications.argoproj.io/subscribe.on-health-degraded.backend-dev: ""
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@k8s-argocd/applications/dev/app/backend.yaml` around lines 25 - 28,
Annotations notifications.argoproj.io/subscribe.* currently reference the stale
suffix "backend-nonprod"; update each subscription key
(notifications.argoproj.io/subscribe.on-sync-running.backend-nonprod,
notifications.argoproj.io/subscribe.on-deployed.backend-nonprod,
notifications.argoproj.io/subscribe.on-sync-failed.backend-nonprod,
notifications.argoproj.io/subscribe.on-health-degraded.backend-nonprod) to use
"backend-dev" so the annotation keys match the dev naming convention in this PR.
```yaml
    argocd-image-updater.argoproj.io/image-list: asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-fe/pinhouse-web
    argocd-image-updater.argoproj.io/frontend.update-strategy: newest-build
    argocd-image-updater.argoproj.io/frontend.allow-tags: regexp:^[0-9]{8}_[0-9]{6}-[a-f0-9]{7}$
    argocd-image-updater.argoproj.io/frontend.kustomize.image-name: REPLACE_ME
```
🧩 Analysis chain
🌐 Web query:
When using alias-based annotations (frontend.update-strategy, etc.) in Argo CD Image Updater, is the alias declaration (frontend=) in image-list required? Please confirm against the official documentation.
💡 Result:
Yes. In Argo CD Image Updater, using an alias-based annotation (e.g. argocd-image-updater.argoproj.io/frontend.update-strategy) requires the alias declaration (frontend=) in image-list. Per the official docs (https://argocd-image-updater.readthedocs.io/en/registry-scanner-release-0.2/configuration/images/): the image-list annotation format is [&lt;alias_name&gt;=]&lt;image_path&gt;[:&lt;version_constraint&gt;] (the alias is optional), but per-image update strategy annotations take the form argocd-image-updater.argoproj.io/&lt;image_alias&gt;.update-strategy, where &lt;image_alias&gt; must be an alias assigned in image-list. "Assigning an alias name to an image is necessary in these scenarios: ... If you want to set custom options for a given image's update strategy..." "It is generally recommended to set aliases for each of your images. Many of the features depend on an alias being set". Specifying only an image_path in image-list without an alias allows only the default semver strategy; alias-specific annotations (frontend.update-strategy) only work when the alias is declared in image-list.
Citations:
- 1: https://argocd-image-updater.readthedocs.io/en/registry-scanner-release-0.2/configuration/images/
- 2: https://argocd-image-updater.readthedocs.io/en/registry-scanner-release-0.2/basics/update-strategies/
- 3: feat: Support app-wide update-strategy annotations argoproj-labs/argocd-image-updater#338
- 4: https://oneuptime.com/blog/post/2026-02-26-argocd-latest-strategy-image-updates/view
The missing Image Updater alias declaration means the update strategy and tag filter are not applied

Because the image-list on line 18 declares no alias, the frontend.* annotations on lines 19-21 (update-strategy, allow-tags, kustomize.image-name) have no effect. Per the official Argo CD Image Updater documentation, a per-image update strategy requires the alias to be named in image-list.

Required fix

```diff
-    argocd-image-updater.argoproj.io/image-list: asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-fe/pinhouse-web
+    argocd-image-updater.argoproj.io/image-list: frontend=asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-fe/pinhouse-web
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
    argocd-image-updater.argoproj.io/image-list: frontend=asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-fe/pinhouse-web
    argocd-image-updater.argoproj.io/frontend.update-strategy: newest-build
    argocd-image-updater.argoproj.io/frontend.allow-tags: regexp:^[0-9]{8}_[0-9]{6}-[a-f0-9]{7}$
    argocd-image-updater.argoproj.io/frontend.kustomize.image-name: REPLACE_ME
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@k8s-argocd/applications/dev/app/frontend.yaml` around lines 18 - 21, the
image alias is missing from image-list, so the per-image settings
(frontend.update-strategy, frontend.allow-tags, frontend.kustomize.image-name)
are not applied; update the argocd-image-updater.argoproj.io/image-list value to
"frontend=asia-northeast3-docker.pkg.dev/dev-pinhouse/pinhouse-dev-fe/pinhouse-web"
so the frontend alias is declared explicitly and matches the identifiers in use
(frontend.update-strategy, frontend.allow-tags, frontend.kustomize.image-name).
```bash
token_response="$(curl -fsSL -H 'Metadata-Flavor: Google' \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token)"
access_token="$(printf '%s' "${token_response}" | sed -n 's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')"
```
🧩 Analysis chain
🏁 Script executed:

```bash
find . -name "k8s-worker-init.sh" -type f
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 179
🏁 Script executed:

```bash
sed -n '75,90p' ./terraform/environments/dev/scripts/k8s-worker-init.sh
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 577
🏁 Script executed:

```bash
head -30 ./terraform/environments/dev/scripts/k8s-worker-init.sh
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 892
🏁 Script executed:

```bash
wc -l ./terraform/environments/dev/scripts/k8s-worker-init.sh
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 126
🏁 Script executed:

```bash
cat -n ./terraform/environments/dev/scripts/k8s-worker-init.sh
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 4904
🏁 Script executed:

```bash
# Verify token format from Google metadata service
curl -s -H 'Metadata-Flavor: Google' "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token?alt=json" 2>/dev/null | head -c 200 || echo "Note: Token format verification skipped (requires GCP environment)"
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 49
🏁 Script executed:

```bash
# Test sed regex robustness with various JSON scenarios
python3 <<'PYEOF'
import json
import subprocess
# Simulate various Google metadata token response scenarios
test_cases = [
    # Standard case
    '{"access_token":"ya29.c.KqaBla...","expires_in":3599,"token_type":"Bearer"}',
    # With extra spaces
    '{"access_token" : "ya29.c.KqaBla...","expires_in":3599}',
    # With newlines
    '{\n "access_token": "ya29.c.KqaBla...",\n "expires_in": 3599\n}',
    # Token with hyphen/underscore (common in real tokens)
    '{"access_token":"ya29_c-KqaBla-test_token...","expires_in":3599}',
]
sed_regex = r's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
for i, test_json in enumerate(test_cases, 1):
    print(f"\n=== Test case {i} ===")
    print(f"Input: {test_json[:80]}")
    try:
        # Parse with Python JSON (correct)
        correct_token = json.loads(test_json).get("access_token", "")
        print(f"Python JSON parse: {correct_token[:40]}")
        # Parse with sed (current method)
        result = subprocess.run(
            f"printf '%s' {repr(test_json)} | sed -n '{sed_regex}'",
            shell=True,
            capture_output=True,
            text=True
        )
        sed_token = result.stdout.strip()
        print(f"Sed parse result: {sed_token[:40] if sed_token else '(EMPTY - FAILED)'}")
        if sed_token != correct_token:
            print("❌ MISMATCH - sed parsing failed!")
        else:
            print("✓ Match")
    except Exception as e:
        print(f"Error: {e}")
PYEOF
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 776
🏁 Script executed:

```bash
# Test more problematic edge cases
python3 <<'PYEOF'
import json
import subprocess
# More edge cases that could break sed
test_cases = [
    # Token with forward slashes (common in base64)
    '{"access_token":"ya29/c+KqaBla/test/token==","expires_in":3599}',
    # Token with backslash escape sequence (unlikely but possible)
    '{"access_token":"ya29\\u0041test","expires_in":3599}',
    # Very long token
    '{"access_token":"' + 'a' * 2000 + '","expires_in":3599}',
]
sed_regex = r's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
for i, test_json in enumerate(test_cases, 1):
    print(f"\n=== Edge case {i} ===")
    display = test_json[:80] if len(test_json) < 100 else test_json[:80] + "..."
    print(f"Input: {display}")
    try:
        correct_token = json.loads(test_json).get("access_token", "")
        print(f"Python JSON length: {len(correct_token)}")
        result = subprocess.run(
            f"printf '%s' {repr(test_json)} | sed -n '{sed_regex}'",
            shell=True,
            capture_output=True,
            text=True
        )
        sed_token = result.stdout.strip()
        print(f"Sed result length: {len(sed_token)}")
        if sed_token == correct_token:
            print("✓ Match")
        else:
            print(f"❌ MISMATCH")
            print(f"  Expected first 40 chars: {correct_token[:40]}")
            print(f"  Got: {sed_token[:40] if sed_token else '(EMPTY)'}")
    except Exception as e:
        print(f"Error: {e}")
PYEOF
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 575
🏁 Script executed:

```bash
# Check if jq is a viable alternative (lighter than python3)
grep -n "jq" ./terraform/environments/dev/scripts/k8s-worker-init.sh
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 49
🏁 Script executed:

```bash
# Check package dependencies in the script more carefully
grep -E "(apt-get install|install -y)" ./terraform/environments/dev/scripts/k8s-worker-init.sh
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 176
🏁 Script executed:

```bash
# Check prod script to see if it has similar code or different approach
sed -n '75,90p' ./terraform/environments/prod/scripts/k8s-worker-init.sh
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 577
sed-based JSON parsing risks failing to extract the auth token.

The sed regex on line 83 cannot correctly handle JSON responses containing Unicode escape sequences (e.g. \u0041) and is fragile to any change in the JSON structure. This can cause intermittent kubelet image-pull authentication failures.

Recommended fix

Option 1: use jq (recommended, lightweight)

```diff
-apt-get install -y apt-transport-https ca-certificates curl gpg containerd
+apt-get install -y apt-transport-https ca-certificates curl gpg containerd jq
-access_token="$(printf '%s' "${token_response}" | sed -n 's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')"
+access_token="$(printf '%s' "${token_response}" | jq -r '.access_token')"
```

Option 2: use Python3

```diff
-apt-get install -y apt-transport-https ca-certificates curl gpg containerd
+apt-get install -y apt-transport-https ca-certificates curl gpg containerd python3
-access_token="$(printf '%s' "${token_response}" | sed -n 's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')"
+access_token="$(printf '%s' "${token_response}" | python3 -c 'import json,sys; print(json.load(sys.stdin).get("access_token",""))')"
```
+access_token="$(printf '%s' "${token_response}" | python3 -c 'import json,sys; print(json.load(sys.stdin).get("access_token",""))')"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@terraform/environments/dev/scripts/k8s-worker-init.sh` around lines 81 - 84,
The current sed-based extraction of access_token from token_response is fragile;
change the access token parsing to use a proper JSON parser instead of sed:
replace the access_token assignment that uses sed with a jq-based extraction
(e.g., parse token_response with jq -r '.access_token') or a small python3
one-liner that loads JSON and prints ["access_token"]; ensure the new command
reads from the token_response variable (or pipes curl directly) and fails fast
if the key is missing so kubelet image pull auth won’t get an empty token.
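To see the fragility concretely, here is a small self-contained comparison of a sed-style regex extraction against real JSON decoding; the token value is fabricated, and the Python regex merely mimics the script's sed pattern:

```python
import json
import re

# Mimic the script's sed pattern as a Python regex (illustrative only)
sed_like = re.compile(r'.*"access_token"[ \t]*:[ \t]*"([^"]*)".*')

# A response whose token contains a JSON unicode escape
resp = '{"access_token": "ya29\\u0041token", "expires_in": 3599}'

print(sed_like.match(resp).group(1))     # raw text, escape left undecoded
print(json.loads(resp)["access_token"])  # properly decoded token
```

The regex hands back the literal `\u0041` sequence while a real JSON parser decodes it, which is exactly the class of mismatch that produces an invalid bearer token.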
```hcl
  var.create_storage_buckets ? tomap({
    static_assets = {
      name                        = "${var.project}-${var.environment}"
      storage_class               = "STANDARD"
      uniform_bucket_level_access = true
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "storage 모듈의 버킷 이름 사용 방식:"
rg -n -C3 'resource\s+"google_storage_bucket"|name\s*=\s*each\.value\.name' terraform/modules/storage
echo
echo "dev 환경의 bucket name 선언:"
sed -n '16,75p' terraform/environments/dev/storage.tf
```

Repository: PinHouse/PinHouse_CLOUD
Length of output: 3457
Bucket names are exposed to collision risk in the GCS global namespace.

Because the storage module uses the name value directly as the GCS bucket name, names like pinhouse-dev, pinhouse-dev-loki, and pinhouse-dev-tempo can easily collide with other projects and organizations. Please append the project_id or a fixed suffix to the bucket names to guarantee global uniqueness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@terraform/environments/dev/storage.tf` around lines 19 - 23, the storage
module currently uses "${var.project}-${var.environment}" directly as the GCS
bucket name (static_assets.name and the other bucket name fields), which risks
global name collisions; change the bucket-name logic so every name includes a
unique identifier, for example by combining var.project_id or
data.google_project.<...>.project_id as a prefix or suffix to produce something
like "${var.project}-${var.environment}-${var.project_id}" for global
uniqueness, and normalize any characters disallowed by the naming rules
(uppercase, underscores, etc.) to lowercase/hyphens.
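One way to implement the suggestion is a small name-composition helper. The sketch below is hypothetical (the function name and inputs are made up); it lowercases, normalizes disallowed characters, and truncates to the 63-character GCS limit:

```python
import re

def gcs_bucket_name(project: str, environment: str, project_id: str) -> str:
    """Compose a globally unique GCS bucket name (hypothetical helper)."""
    raw = f"{project}-{environment}-{project_id}".lower()
    # GCS bucket names allow lowercase letters, digits, and hyphens here;
    # replace anything else (underscores, etc.) with a hyphen.
    name = re.sub(r"[^a-z0-9-]", "-", raw)
    return name[:63]  # GCS caps bucket names at 63 characters

print(gcs_bucket_name("pinhouse", "dev", "dev-pinhouse"))  # pinhouse-dev-dev-pinhouse
```

In Terraform the same idea can be expressed inline with `lower()` and `replace()` in a local, but keeping the rule in one place makes the normalization easy to audit.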
project_id = "your-prod-gcp-project-id"
project    = "pinhouse"
Prod identifiers remain in the dev example.
The Line 4 placeholder and the repository descriptions are still written for prod, which invites mistakes for anyone who starts by copying the example verbatim. In particular, a mis-copied project_id can point a dev plan/apply at the production project.
Suggested fix
-project_id = "your-prod-gcp-project-id"
+project_id = "your-dev-gcp-project-id"
@@
- description = "프로덕션 환경용 프런트엔드 이미지 저장소"
+ description = "개발 환경용 프런트엔드 이미지 저장소"
@@
- description = "프로덕션 환경용 백엔드 이미지 저장소"
+ description = "개발 환경용 백엔드 이미지 저장소"

Also applies to: 30-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@terraform/environments/dev/terraform.tfvars.example` around lines 4 - 5,
Update the terraform example variables so they don't point to production:
replace the prod placeholder value for project_id with a dev-safe placeholder
(e.g., "your-dev-gcp-project-id") and adjust the repository
description/placeholder for project (the `project` and `project_id` entries) to
clearly indicate this file is for the dev environment; ensure the same change is
applied to the other occurrences referenced (the other example blocks around the
30-45 range) so no example terraform.tfvars still references production
identifiers.
"Dev_BE_DB_URL"
]

ESO 또는 특정 서비스 계정에 접근 권한을 줄 때만 아래 값을 채웁니다.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
python -m pip install --quiet python-hcl2
python - <<'PY'
from pathlib import Path
import hcl2
path = Path("terraform/environments/dev/terraform.tfvars.example")
with path.open() as fp:
hcl2.load(fp)
print("HCL parse OK")
PY

Repository: PinHouse/PinHouse_CLOUD
Length of output: 4173
🏁 Script executed:
cat -n terraform/environments/dev/terraform.tfvars.example | sed -n '90,100p'

Repository: PinHouse/PinHouse_CLOUD
Length of output: 508
Line 95 lacks HCL comment syntax, so terraform.tfvars.example is invalid.
The text currently starts without a # character, so the HCL2 parser cannot parse this line. Add a # at the start of the line to fix it.
-ESO 또는 특정 서비스 계정에 접근 권한을 줄 때만 아래 값을 채웁니다.
+# ESO 또는 특정 서비스 계정에 접근 권한을 줄 때만 아래 값을 채웁니다.📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-ESO 또는 특정 서비스 계정에 접근 권한을 줄 때만 아래 값을 채웁니다.
+# ESO 또는 특정 서비스 계정에 접근 권한을 줄 때만 아래 값을 채웁니다.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@terraform/environments/dev/terraform.tfvars.example` at line 95, The line
"ESO 또는 특정 서비스 계정에 접근 권한을 줄 때만 아래 값을 채웁니다." is not a valid HCL comment and
breaks terraform.tfvars.example parsing; fix it by prefixing that exact line
with a hash character (#) so it becomes an HCL comment (i.e., add "#" at the
start of the line containing that Korean sentence) to restore valid HCL2 syntax.
# Terraform 공통 설정
# ========================================
terraform {
  required_version = ">= 1.0"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# 버전 제약과 optional 타입 사용 위치를 함께 확인
rg -n 'required_version|optional\\(' terraform/environments/dev

Repository: PinHouse/PinHouse_CLOUD
Length of output: 173
🏁 Script executed:
#!/bin/bash
# Search for required_version and optional() separately with proper syntax
echo "=== Checking required_version ==="
rg -n 'required_version' terraform/environments/dev/
echo -e "\n=== Checking for optional() usage ==="
rg -n 'optional(' terraform/environments/dev/

Repository: PinHouse/PinHouse_CLOUD
Length of output: 296
🌐 Web query:
Terraform optional() function type constraint minimum version requirement
💡 Result:
The optional function in Terraform type constraints requires Terraform version 1.3.0 or later. It was introduced as a stable feature in Terraform 1.3.0, graduating from the experimental module_variable_optional_attrs feature that began in Terraform 0.14.0. Prior to 1.3.0, an experimental opt-in was required in the terraform block:

terraform {
  experiments = [module_variable_optional_attrs]
}

This experiment is no longer supported after 1.3.0; modules using optional should declare required_version = ">= 1.3.0" in their terraform block. The official documentation confirms optional usage in object type constraints, such as:

variable "example" {
  type = object({
    required = string
    optional = optional(number, 42)
  })
}

As of 2026-04-11, this remains the minimum version, with no changes in later releases like 1.14.
Citations:
- 1: https://developer.hashicorp.com/terraform/language/expressions/type-constraints
- 2: https://github.com/hashicorp/terraform/blob/v1.3.1/CHANGELOG.md
- 3: https://github.com/hashicorp/terraform/releases/tag/v1.3.0
- 4: https://developer.hashicorp.com/terraform/language/v1.3.x/upgrade-guides
🏁 Script executed:
#!/bin/bash
# Search for optional string in variables.tf without regex special chars
echo "=== Listing files in terraform/environments/dev/ ==="
ls -la terraform/environments/dev/
echo -e "\n=== Searching for 'optional' string in variables.tf ==="
rg 'optional' terraform/environments/dev/variables.tf 2>/dev/null || echo "No match or file not found"
echo -e "\n=== Showing variables.tf content ==="
cat terraform/environments/dev/variables.tf 2>/dev/null || echo "File not found"

Repository: PinHouse/PinHouse_CLOUD
Length of output: 9853
required_version = ">= 1.0" does not guarantee support for the optional() feature used in the code.
The artifact_registry_repositories variable in this PR uses optional(string), optional(bool), and optional(map(string)) inside an object type constraint. Since the optional() function is only officially supported from Terraform 1.3.0, the minimum version must be set explicitly.

Suggested fix
- required_version = ">= 1.0"
+ required_version = ">= 1.3.0"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-required_version = ">= 1.0"
+required_version = ">= 1.3.0"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@terraform/environments/dev/versions.tf` at line 5, The Terraform minimum
required_version is too low for the optional() type used in the
artifact_registry_repositories variable; update the required_version declaration
in versions.tf from ">= 1.0" to at least ">= 1.3.0" so Terraform supports
optional(string), optional(bool), and optional(map(string)) used in the variable
definition (search for artifact_registry_repositories and optional(...) usages
to locate the impacted code).
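A lightweight guard along these lines could catch the mismatch before review. This is a sketch under assumptions — the two files are written to a temp directory for demonstration, with contents abbreviated, and real usage would scan the actual terraform/ tree:

```shell
# Demo copies of the two reviewed files (hypothetical, written to a temp dir).
dir=$(mktemp -d)
cat > "$dir/versions.tf" <<'EOF'
terraform {
  required_version = ">= 1.0"
}
EOF
cat > "$dir/variables.tf" <<'EOF'
variable "artifact_registry_repositories" {
  type = map(object({ kms_key_name = optional(string) }))
}
EOF

# optional() in type constraints needs Terraform >= 1.3.0; warn when the
# declared constraint still admits older releases.
warn=""
if grep -rq 'optional(' "$dir" && grep -q '">= 1.0"' "$dir/versions.tf"; then
  warn="optional() used but required_version permits Terraform < 1.3.0"
  echo "WARN: $warn"
fi
```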
📌 Work done
🔍 Deployment environment completed
🖼️ Screenshots
🔗 Related issues
#56 (dev environment IaC setup)
✅ Checklist
Summary by CodeRabbit
Development environment setup
New features
Infrastructure