
chore: Consolidate monitoring configuration into the Platform chart #42

Closed
doup2001 wants to merge 2 commits into main from chore/#40

Conversation

Member

@doup2001 doup2001 commented Apr 9, 2026

📌 What was done

  • Moved the monitoring-related resources into the main Platform Helm chart.
  • Relocated the configuration into the Platform chart's subchart or templates directory.

🔍 Notes

  • Integrated monitoring as a subcomponent of the Platform chart so it is managed from a single deployment point.
  • Improved the modular deployment flow via a dependencies or umbrella chart structure.
  • Simplified the ArgoCD Application so it references only the single Platform chart.
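The umbrella structure described above can be sketched roughly as follows. This is a hypothetical Chart.yaml: the dependency name, version, repository path, and condition key are illustrative and not taken from this repository.

```yaml
# Hypothetical sketch of an umbrella Platform chart.
# A monitoring subchart is wired in as a conditional dependency,
# so ArgoCD only has to reference this one chart.
apiVersion: v2
name: platform-chart
version: 0.1.0
dependencies:
  - name: monitoring
    version: 0.1.0
    repository: "file://charts/monitoring"  # local subchart directory
    condition: monitoring.enabled           # toggled from values.yaml
```

Alternatively, as this PR does with templates/monitoring/grafana-httproute.yaml, the monitoring manifests can live directly under the main chart's templates/ directory without a separate subchart.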

🖼️ Screenshots

(Not applicable)

🔗 Related issue

#40

✅ Checklist

  • Built and tested locally
  • Code review feedback addressed
  • Checked whether documentation is needed

Summary by CodeRabbit

Release notes

  • Chores
    • Added the monitoring namespace and Grafana deployment settings
    • Updated the gateway listener configuration (Argo disabled, Grafana support added)
    • Reorganized the monitoring deployment configuration

@doup2001 doup2001 self-assigned this Apr 9, 2026
@doup2001 doup2001 added the chore (build, deployment, CI/CD, and other infrastructure work) label Apr 9, 2026
@coderabbitai

coderabbitai Bot commented Apr 9, 2026

Warning

Rate limit exceeded

@doup2001 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 1 minute and 58 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 1 minute and 58 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a74fdb25-129f-4666-a9a0-fb28e3891379

📥 Commits

Reviewing files that changed from the base of the PR and between 98f8cba and 4007c57.

📒 Files selected for processing (1)
  • k8s-helm/platform-chart/templates/monitoring/grafana-httproute.yaml

Overview

Removed the Argo CD Application manifest and the Kubernetes HTTPRoute resource, and changed the Grafana deployment setup by adding a monitoring configuration block to the Helm values files.

Changes

Cohort / File(s) — Summary

Argo CD application and Kustomize removal
k8s-argocd/applications/prod/platform/monitoring-httproute.yaml, k8s-kustomize/platform/monitoring/httproute.yaml, k8s-kustomize/platform/monitoring/kustomization.yaml
Completely removed the platform-monitoring-httproute-prod Argo CD Application manifest (42 lines), the Grafana HTTPRoute resource (27 lines), and the related Kustomize configuration file (9 lines).

Helm values additions
k8s-helm/platform-chart/values.yaml, k8s-helm/platform-chart/values-prod.yaml
Added the monitoring namespace configuration and Grafana settings (enabled, domain, serviceName, servicePort). In the base values file, changed gateway.listeners.argo.enabled from true to false.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 min

Related PRs

🐰 ✨ The old paths are gone,
new settings take their place!
Argo rests while Helm sings,
and Grafana sits more tidily now. 🌙

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The PR title clearly summarizes the change: consolidating the monitoring configuration into the Platform chart.
Description check ✅ Passed The PR description includes every required template section, with the changes, notes, related issue, and checklist filled in in detail.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch chore/#40

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
k8s-helm/platform-chart/values.yaml (1)

75-79: Duplicate Grafana domain settings create a risk of configuration drift.

gateway.listeners.grafana.domain and monitoring.grafana.domain manage the same kind of value in two places; if only one of them is changed, the listener and the route can fall out of sync. Consolidating them into a single source of truth is recommended.

Also applies to: 124-128
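One way to keep a single source of truth within one values file is a YAML anchor/alias. Under the key names mentioned in the review (the anchor name itself is hypothetical), it could look like this:

```yaml
# Sketch: declare the Grafana domain once and alias it elsewhere,
# so the listener and the monitoring block cannot drift apart.
gateway:
  listeners:
    grafana:
      domain: &grafanaDomain grafana.pinhouse.co.kr
monitoring:
  grafana:
    domain: *grafanaDomain  # resolves to the same string as the anchor above
```

Note that anchors only work within a single file; to share the value across values.yaml and values-prod.yaml, referencing {{ .Values.gateway.listeners.grafana.domain }} from the templates, as the review suggests, is the more robust option.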

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s-helm/platform-chart/values.yaml` around lines 75 - 79, The values file
currently duplicates Grafana domain across grafana.domain,
gateway.listeners.grafana.domain and monitoring.grafana.domain which risks
drift; pick one canonical key (e.g., gateway.listeners.grafana.domain) and
update other entries (monitoring.grafana.domain and grafana.domain) to reference
that single source via a YAML anchor/alias or Helm templating (e.g., use {{
.Values.gateway.listeners.grafana.domain }}), and ensure tlsSecretName stays
aligned (or likewise referenced) so all Grafana-related configs (grafana.domain,
gateway.listeners.grafana.domain, monitoring.grafana.domain, tlsSecretName) are
derived from the single source.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@k8s-helm/platform-chart/values-prod.yaml`:
- Around line 149-154: The Grafana backend service referenced by
grafana.serviceName is incorrect and will break HTTPRoute; update the
grafana.serviceName value from "pinhouse-monitoring-core-grafana" to the actual
service name "monitoring-core-kube-prome-grafana" (or alternatively set the
monitoring-core release's Helm fullnameOverride to "pinhouse-monitoring-core")
so the service referenced by grafana.serviceName matches the
kube-prometheus-stack naming convention.

---

Nitpick comments:
In `@k8s-helm/platform-chart/values.yaml`:
- Around line 75-79: The values file currently duplicates Grafana domain across
grafana.domain, gateway.listeners.grafana.domain and monitoring.grafana.domain
which risks drift; pick one canonical key (e.g.,
gateway.listeners.grafana.domain) and update other entries
(monitoring.grafana.domain and grafana.domain) to reference that single source
via a YAML anchor/alias or Helm templating (e.g., use {{
.Values.gateway.listeners.grafana.domain }}), and ensure tlsSecretName stays
aligned (or likewise referenced) so all Grafana-related configs (grafana.domain,
gateway.listeners.grafana.domain, monitoring.grafana.domain, tlsSecretName) are
derived from the single source.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 56cbd5db-6a49-41c9-bb74-793533f61ac7

📥 Commits

Reviewing files that changed from the base of the PR and between 2e4a716 and 98f8cba.

📒 Files selected for processing (5)
  • k8s-argocd/applications/prod/platform/monitoring-httproute.yaml
  • k8s-helm/platform-chart/values-prod.yaml
  • k8s-helm/platform-chart/values.yaml
  • k8s-kustomize/platform/monitoring/httproute.yaml
  • k8s-kustomize/platform/monitoring/kustomization.yaml
💤 Files with no reviewable changes (3)
  • k8s-kustomize/platform/monitoring/httproute.yaml
  • k8s-argocd/applications/prod/platform/monitoring-httproute.yaml
  • k8s-kustomize/platform/monitoring/kustomization.yaml

Comment on lines +149 to +154
namespace: monitoring
grafana:
enabled: true
domain: grafana.pinhouse.co.kr
serviceName: pinhouse-monitoring-core-grafana
servicePort: 80

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Searching the monitoring-core release for clues about the Grafana service name"
rg -n --iglob '*.yaml' 'monitoring-core-grafana|fullnameOverride|nameOverride|service' k8s-helm/releases/monitoring-core

echo "[2] Checking whether the platform-chart templates wire the serviceName value into backendRefs"
rg -n --iglob '*.yaml' '\.Values\.monitoring\.grafana\.serviceName|backendRefs|name:|port:' k8s-helm/platform-chart/templates

Repository: PinHouse/PinHouse_CLOUD

Length of output: 10274


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Checking the Grafana settings in monitoring-core values.yaml"
cat -n k8s-helm/releases/monitoring-core/values.yaml | grep -A 20 "^[[:space:]]*32"

echo "[2] Finding Grafana Service definitions in the monitoring-core templates"
fd -t f . k8s-helm/releases/monitoring-core/templates -name "*.yaml" | head -20

echo "[3] Checking the monitoring-core chart structure"
ls -la k8s-helm/releases/monitoring-core/

echo "[4] Checking monitoring-core's Chart.yaml"
cat k8s-helm/releases/monitoring-core/Chart.yaml

Repository: PinHouse/PinHouse_CLOUD

Length of output: 998


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] monitoring-core chart info"
find k8s-helm/releases/monitoring-core -name "Chart.yaml" -o -name "Chart.yml" | xargs cat

echo "[2] Checking the overall structure of monitoring-core values.yaml (Grafana section)"
rg -A 5 "grafana:" k8s-helm/releases/monitoring-core/values.yaml

echo "[3] Finding the release name used when deploying monitoring-core"
rg -n "monitoring-core" k8s-helm/ --iglob "*.yaml" --iglob "*.yml" | grep -E "(release|name:|helm)" | head -20

echo "[4] Checking the monitoring namespace settings in platform-chart"
cat -n k8s-helm/platform-chart/values-prod.yaml | sed -n '145,160p'

Repository: PinHouse/PinHouse_CLOUD

Length of output: 1574


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Searching monitoring-core for fullnameOverride/nameOverride"
rg -n "fullnameOverride|nameOverride" k8s-helm/releases/monitoring-core/

echo "[2] Checking the kube-prometheus-stack settings in monitoring-core"
rg -B 2 -A 15 "kube-prometheus-stack:" k8s-helm/releases/monitoring-core/values.yaml | head -50

echo "[3] Grafana-related settings in monitoring-core values-prod-gitops.yaml"
rg -B 2 -A 10 "grafana:" k8s-helm/releases/monitoring-core/values-prod-gitops.yaml

Repository: PinHouse/PinHouse_CLOUD

Length of output: 133


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Full monitoring-core values.yaml (first 100 lines)"
head -100 k8s-helm/releases/monitoring-core/values.yaml

echo "[2] Checking the monitoring-core release name in helmfile or ArgoCD configuration"
rg -n "monitoring-core" k8s-helm/ -t yaml | grep -E "(release:|name:|chart:)" | head -15

echo "[3] Full check of monitoring-core/values-prod-gitops.yaml"
cat k8s-helm/releases/monitoring-core/values-prod-gitops.yaml | head -80

Repository: PinHouse/PinHouse_CLOUD

Length of output: 4519


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Checking the project root structure"
ls -la k8s-helm/ | head -20

echo "[2] Looking for helmfile or deployment configuration"
find k8s-helm -name "helmfile*" -o -name "kustomization.yaml" -o -name "*.tf" | head -10

echo "[3] monitoring-core deployment-related configuration"
rg -n "release:" k8s-helm/ --iglob "*.yaml" | grep -i monitoring | head -10

echo "[4] Checking how the grafana service name is determined in kube-prometheus-stack"
rg -n "kube-prome|fullName" k8s-helm/releases/monitoring-core/ --iglob "*.yaml"

Repository: PinHouse/PinHouse_CLOUD

Length of output: 436


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Checking how the monitoring-alloy values reach Prometheus"
rg -B 3 -A 3 "monitoring-core-kube-prome" k8s-helm/releases/monitoring-alloy/values.yaml

echo "[2] Identifying the release name used when deploying monitoring-core (argocd app or deploy script)"
find . -name "*.yaml" -o -name "*.yml" -o -name "*.sh" | xargs grep -l "monitoring-core" 2>/dev/null | head -10

echo "[3] Checking the default kube-prometheus-stack service name pattern"
rg -n "service:" k8s-helm/releases/monitoring-core/values.yaml -A 5 | head -30

Repository: PinHouse/PinHouse_CLOUD

Length of output: 736


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Checking the ArgoCD monitoring-core app definition"
cat k8s-argocd/applications/prod/monitoring/monitoring-core.yaml

echo "[2] Final check for fullnameOverride in the kube-prometheus-stack section"
rg -n "fullnameOverride|grafana:" k8s-helm/releases/monitoring-core/values.yaml -A 3

Repository: PinHouse/PinHouse_CLOUD

Length of output: 1309


The serviceName value does not match the actual Grafana Service name, so the HTTPRoute backend will fail.

Current configuration:

  • Release name in the ArgoCD deployment: monitoring-core
  • kube-prometheus-stack naming convention: {{ releaseName }}-kube-prome-{{ component }}
  • Therefore the actual Grafana Service name: monitoring-core-kube-prome-grafana

However, the platform-chart values file has:

  • serviceName: pinhouse-monitoring-core-grafana

With this value the HTTPRoute resolves to a nonexistent backend service, producing 503 errors at runtime.

Required fix:
Change serviceName to monitoring-core-kube-prome-grafana, or set the monitoring-core release's fullnameOverride to pinhouse-monitoring-core.
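The first option, applied to the values-prod.yaml block quoted in this review, would look roughly like this. Only serviceName changes; the other keys are copied from the PR, and the nesting under a top-level monitoring block is an assumption based on the file references in the review.

```yaml
# Sketch of the corrected values-prod.yaml block (only serviceName changed).
monitoring:
  namespace: monitoring
  grafana:
    enabled: true
    domain: grafana.pinhouse.co.kr
    # Follows the kube-prometheus-stack convention
    # {{ releaseName }}-kube-prome-{{ component }} for release "monitoring-core".
    serviceName: monitoring-core-kube-prome-grafana
    servicePort: 80
```

The alternative is to keep serviceName as-is and set fullnameOverride: pinhouse-monitoring-core in the monitoring-core release, which renames the generated Service to match; either way, the route and the release must agree on one naming scheme.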

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s-helm/platform-chart/values-prod.yaml` around lines 149 - 154, The Grafana
backend service referenced by grafana.serviceName is incorrect and will break
HTTPRoute; update the grafana.serviceName value from
"pinhouse-monitoring-core-grafana" to the actual service name
"monitoring-core-kube-prome-grafana" (or alternatively set the monitoring-core
release's Helm fullnameOverride to "pinhouse-monitoring-core") so the service
referenced by grafana.serviceName matches the kube-prometheus-stack naming
convention.

@doup2001 doup2001 closed this Apr 9, 2026

Labels

chore — build, deployment, CI/CD, and other infrastructure work

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant