Imported Translations from Zanata

For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: I916ad35e4b1034ce78473526d6a2a43921387661
OpenStack Proposal Bot 2018-09-30 06:37:59 +00:00
parent c573ba198f
commit 12a4df5a5c
5 changed files with 4015 additions and 0 deletions

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,732 @@
# suhartono <cloudsuhartono@gmail.com>, 2018. #zanata
msgid ""
msgstr ""
"Project-Id-Version: openstack-helm\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2018-09-29 05:49+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-09-25 05:17+0000\n"
"Last-Translator: suhartono <cloudsuhartono@gmail.com>\n"
"Language-Team: Indonesian\n"
"Language: id\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=1; plural=0\n"
msgid "3 Node (VM based) env."
msgstr "3 Node (VM based) env."
msgid ""
"5. Replace the failed disk with a new one. If you repair (not replace) the "
"failed disk, you may need to run the following:"
msgstr ""
"5. Ganti disk yang gagal dengan yang baru. Jika Anda memperbaiki (bukan "
"mengganti) disk yang gagal, Anda mungkin perlu menjalankan yang berikut:"
msgid ""
"A Ceph Monitor running on voyager3 (whose Monitor database is destroyed) "
"becomes out of quorum, and the mon-pod's status stays in ``Running`` -> "
"``Error`` -> ``CrashLoopBackOff`` while keeps restarting."
msgstr ""
"Monitor Ceph yang berjalan di voyager3 (yang database Monitor-nya dimatikan) "
"menjadi tidak di quorum, dan status mon-pod tetap dalam ``Running`` -> "
"``Error`` -> ``CrashLoopBackOff`` sementara terus restart."
msgid "Adding Tests"
msgstr "Menambahkan Tes"
msgid ""
"Additional information on Helm tests for OpenStack-Helm and how to execute "
"these tests locally via the scripts used in the gate can be found in the "
"gates_ directory."
msgstr ""
"Informasi tambahan tentang tes Helm untuk OpenStack-Helm dan cara melakukan "
"tes ini secara lokal melalui skrip yang digunakan di gate dapat ditemukan di "
"direktori gates_."
msgid ""
"After 10+ miniutes, Ceph starts rebalancing with one node lost (i.e., 6 osds "
"down) and the status stablizes with 18 osds."
msgstr ""
"Setelah 10+ menit, Ceph mulai menyeimbangkan kembali dengan satu node yang "
"hilang (yaitu, 6 osds turun) dan statusnya stabil dengan 18 osds."
msgid "After reboot (node voyager3), the node status changes to ``NotReady``."
msgstr ""
"Setelah reboot (node voyager3), status node berubah menjadi ``NotReady``."
msgid ""
"After the host is down (node voyager3), the node status changes to "
"``NotReady``."
msgstr ""
"Setelah host mati (node voyager3), status node berubah menjadi ``NotReady``."
msgid ""
"All tests should be added to the gates during development, and are required "
"for any new service charts prior to merging. All Helm tests should be "
"included as part of the deployment script. An example of this can be seen "
"in this script_."
msgstr ""
"Semua tes harus ditambahkan ke gate selama pengembangan, dan diperlukan "
"untuk chart layanan baru sebelum penggabungan. Semua tes Helm harus "
"dimasukkan sebagai bagian dari skrip pemasangan. Contoh ini dapat dilihat "
"dalam skrip ini_."
msgid ""
"Also, the pod status of ceph-mon and ceph-osd changes from ``NodeLost`` back "
"to ``Running``."
msgstr ""
"Juga, status pod ceph-mon dan ceph-osd berubah dari ``NodeLost`` kembali ke "
"``Running``."
msgid "Any Helm tests associated with a chart can be run by executing:"
msgstr ""
"Tes Helm apa pun yang terkait dengan chart dapat dijalankan dengan "
"mengeksekusi:"
msgid ""
"Any templates for Helm tests submitted should follow the philosophies "
"applied in the other templates. These include: use of overrides where "
"appropriate, use of endpoint lookups and other common functionality in helm-"
"toolkit, and mounting any required scripting templates via the configmap-bin "
"template for the service chart. If Rally tests are not appropriate or "
"adequate for a service chart, any additional tests should be documented "
"appropriately and adhere to the same expectations."
msgstr ""
"Setiap template untuk tes Helm yang diajukan harus mengikuti filosofi yang "
"diterapkan dalam template lain. Ini termasuk: penggunaan menimpa di mana "
"yang sesuai, penggunaan pencarian endpoint dan fungsi umum lainnya dalam "
"helm-toolkit, dan pemasangan semua scripting template yang diperlukan "
"melalui template configmap-bin untuk chart layanan. Jika pengujian Rally "
"tidak sesuai atau memadai untuk chart layanan, pengujian tambahan apa pun "
"harus didokumentasikan dengan tepat dan mematuhi harapan yang sama."
msgid "Capture Ceph pods statuses."
msgstr "Capture Ceph pods statuses."
msgid "Capture Openstack pods statuses."
msgstr "Capture Openstack pods statuses."
msgid "Capture final Ceph pod statuses:"
msgstr "Capture final Ceph pod statuses:"
msgid "Capture final Openstack pod statuses:"
msgstr "Capture final Openstack pod statuses:"
msgid "Case: 1 out of 3 Monitor Processes is Down"
msgstr "Kasus: 1 dari 3 Proses Monitor Sedang Turun"
msgid "Case: 2 out of 3 Monitor Processes are Down"
msgstr "Kasus: 2 dari 3 Proses Monitor Sedang Turun"
msgid "Case: 3 out of 3 Monitor Processes are Down"
msgstr "Kasus: 3 dari 3 Proses Monitor Sedang Turun"
msgid "Case: A OSD pod is deleted"
msgstr "Kasus: Pod OSD dihapus"
msgid "Case: A disk fails"
msgstr "Kasus: Disk gagal"
msgid "Case: A host machine where ceph-mon is running is down"
msgstr "Kasus: Mesin host di mana ceph-mon sedang bekerja sedang mati"
msgid "Case: Monitor database is destroyed"
msgstr "Kasus: Database monitor dimusnahkan"
msgid "Case: OSD processes are killed"
msgstr "Kasus: Proses OSD dimatikan"
msgid "Case: One host machine where ceph-mon is running is rebooted"
msgstr "Kasus: Satu mesin host di mana ceph-mon sedang dijalankan di-reboot"
msgid "Caveats:"
msgstr "Caveats:"
msgid "Ceph Cephfs provisioner docker images."
msgstr "Ceph Cephfs provisioner docker images."
msgid "Ceph Luminous point release images for Ceph components"
msgstr "Ceph Luminous point melepaskan image untuk komponen Ceph"
msgid "Ceph RBD provisioner docker images."
msgstr "Ceph RBD provisioner docker images."
msgid "Ceph Resiliency"
msgstr "Ceph Resiliency"
msgid "Ceph Upgrade"
msgstr "Ceph Upgrade"
msgid ""
"Ceph can be upgreade without downtime for Openstack components in a multinoe "
"env."
msgstr ""
"Ceph dapat ditingkatkan tanpa downtime untuk komponen OpenStack dalam env "
"multinode."
msgid ""
"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
"quorum. Also, 6 osds running on ``voyager3`` are down (i.e., 18 out of 24 "
"osds are up). Some placement groups become degraded and undersized."
msgstr ""
"Status Ceph menunjukkan bahwa ceph-mon yang berjalan pada ``voyager3`` "
"menjadi tidak dapat digunakan. Juga, 6 osds yang berjalan pada ``voyager3`` "
"sedang down (yaitu, 18 dari 24 osds naik). Beberapa kelompok penempatan "
"menjadi terdegradasi dan berukuran kecil."
msgid ""
"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of "
"quorum. Also, six osds running on ``voyager3`` are down; i.e., 18 osds are "
"up out of 24 osds."
msgstr ""
"Status Ceph menunjukkan bahwa ceph-mon yang berjalan pada ``voyager3`` "
"menjadi tidak dapat digunakan. Juga, enam osds yang berjalan di ``voyager3`` "
"turun; yaitu, 18 osds naik dari 24 osds."
msgid "Ceph version: 12.2.3"
msgstr "Ceph versi: 12.2.3"
msgid "Check Ceph Pods"
msgstr "Periksa Ceph Pods"
msgid "Check version of each Ceph components."
msgstr "Periksa versi setiap komponen Ceph."
msgid "Check which images Provisionors and Mon-Check PODs are using"
msgstr "Periksa image mana yang digunakan Provisionors dan Mon-Check PODs"
msgid "Cluster size: 4 host machines"
msgstr "Ukuran cluster: 4 mesin host"
msgid "Conclusion:"
msgstr "Kesimpulan:"
msgid "Confirm Ceph component's version."
msgstr "Konfirmasi versi komponen Ceph."
msgid "Continue with OSH multinode guide to install other Openstack charts."
msgstr ""
"Lanjutkan dengan panduan multinode OSH untuk menginstal chart Openstack "
"lainnya."
msgid "Deploy and Validate Ceph"
msgstr "Menyebarkan dan Memvalidasi Ceph"
msgid "Disk Failure"
msgstr "Kegagalan Disk"
msgid "Docker Images:"
msgstr "Docker Images:"
msgid ""
"Every OpenStack-Helm chart should include any required Helm tests necessary "
"to provide a sanity check for the OpenStack service. Information on using "
"the Helm testing framework can be found in the Helm repository_. Currently, "
"the Rally testing framework is used to provide these checks for the core "
"services. The Keystone Helm test template can be used as a reference, and "
"can be found here_."
msgstr ""
"Setiap OpenStack-Helm chart harus menyertakan tes Helm yang diperlukan untuk "
"memberikan pemeriksaan (sanity check) kewarasan untuk layanan OpenStack. "
"Informasi tentang menggunakan kerangka pengujian Helm dapat ditemukan di "
"repositori Helm. Saat ini, kerangka pengujian Rally digunakan untuk "
"menyediakan pemeriksaan ini untuk layanan inti. Kerangka uji Keystone Helm "
"dapat digunakan sebagai referensi, dan dapat ditemukan di sini_."
msgid "Find that Ceph is healthy with a lost OSD (i.e., a total of 23 OSDs):"
msgstr "Temukan bahwa Ceph sehat dengan OSD yang hilang (yaitu, total 23 OSD):"
msgid "Follow all steps from OSH multinode guide with below changes."
msgstr ""
"Ikuti semua langkah dari panduan multinode OSH dengan perubahan di bawah ini."
msgid ""
"Followed OSH multinode guide steps to setup nodes and install k8 cluster"
msgstr ""
"Mengikuti langkah-langkah panduan multinode OSH untuk mengatur node dan "
"menginstal k8 cluster"
msgid "Followed OSH multinode guide steps upto Ceph install"
msgstr "Mengikuti panduan multinode OSH langkah-langkah upto Ceph menginstal"
msgid "Following is a partial part from script to show changes."
msgstr ""
"Berikut ini adalah bagian parsial dari skrip untuk menunjukkan perubahan."
msgid ""
"From the Kubernetes cluster, remove the failed OSD pod, which is running on "
"``voyager4``:"
msgstr ""
"Dari kluster Kubernetes, hapus pod OSD yang gagal, yang berjalan di "
"``voyager4``:"
msgid "Hardware Failure"
msgstr "Kegagalan perangkat keras"
msgid "Helm Tests"
msgstr "Tes Helm"
msgid "Host Failure"
msgstr "Host Failure"
msgid ""
"In the mean time, we monitor the status of Ceph and noted that it takes "
"about 30 seconds for the 6 OSDs to recover from ``down`` to ``up``. The "
"reason is that Kubernetes automatically restarts OSD pods whenever they are "
"killed."
msgstr ""
"Sementara itu, kami memantau status Ceph dan mencatat bahwa dibutuhkan "
"sekitar 30 detik untuk 6 OSD untuk memulihkan dari ``down`` ke ``up``. "
"Alasannya adalah Kubernetes secara otomatis merestart pod OSD setiap kali "
"mereka dimatikan."
msgid ""
"In the mean time, we monitored the status of Ceph and noted that it takes "
"about 24 seconds for the killed Monitor process to recover from ``down`` to "
"``up``. The reason is that Kubernetes automatically restarts pods whenever "
"they are killed."
msgstr ""
"Sementara itu, kami memantau status Ceph dan mencatat bahwa dibutuhkan "
"sekitar 24 detik untuk proses Monitor yang mati untuk memulihkan dari "
"``down`` ke ``up``. Alasannya adalah Kubernetes secara otomatis me-restart "
"pod setiap kali mereka dimatikan."
msgid "Install Ceph charts (12.2.4) by updating Docker images in overrides."
msgstr ""
"Instal Ceph charts (12.2.4) dengan memperbarui Docker images di overrides."
msgid "Install Ceph charts (version 12.2.4)"
msgstr "Pasang chart Ceph (versi 12.2.4)"
msgid "Install OSH components as per OSH multinode guide."
msgstr "Instal komponen OSH sesuai panduan multinode OSH."
msgid "Install Openstack charts"
msgstr "Pasang chart Openstack"
msgid ""
"It takes longer (about 1 minute) for the killed Monitor processes to recover "
"from ``down`` to ``up``."
msgstr ""
"Diperlukan waktu lebih lama (sekitar 1 menit) untuk proses Monitor yang mati "
"untuk memulihkan dari ``down`` ke ``up``."
msgid "Kubernetes version: 1.10.5"
msgstr "Kubernetes versi: 1.10.5"
msgid "Kubernetes version: 1.9.3"
msgstr "Kubernetes version: 1.9.3"
msgid "Mission"
msgstr "Misi"
msgid "Monitor Failure"
msgstr "Memantau Kegagalan"
msgid ""
"Note: To find the daemonset associated with a failed OSD, check out the "
"followings:"
msgstr ""
"Catatan: Untuk menemukan daemon yang terkait dengan OSD yang gagal, periksa "
"yang berikut:"
msgid "Number of disks: 24 (= 6 disks per host * 4 hosts)"
msgstr "Jumlah disk: 24 (= 6 disk per host * 4 host)"
msgid "OSD Failure"
msgstr "Kegagalan OSD"
msgid "OSD count is set to 3 based on env setup."
msgstr "Penghitungan OSD diatur ke 3 berdasarkan pada env setup."
msgid "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459"
msgstr "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459"
msgid "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb"
msgstr "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb"
msgid ""
"Our focus lies on resiliency for various failure scenarios but not on "
"performance or stress testing."
msgstr ""
"Fokus kami terletak pada ketahanan untuk berbagai skenario kegagalan tetapi "
"tidak pada kinerja atau stress testing."
msgid "Plan:"
msgstr "Rencana:"
msgid "Recovery:"
msgstr "Pemulihan:"
msgid ""
"Remove the entire ceph-mon directory on voyager3, and then Ceph will "
"automatically recreate the database by using the other ceph-mons' database."
msgstr ""
"Hapus seluruh direktori ceph-mon di voyager3, dan kemudian Ceph akan secara "
"otomatis membuat ulang database dengan menggunakan database ceph-mons "
"lainnya."
msgid ""
"Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:"
msgstr "Hapus OSD yang gagal (OSD ID = 2 dalam contoh ini) dari kluster Ceph:"
msgid "Resiliency Tests for OpenStack-Helm/Ceph"
msgstr "Tes Ketahanan untuk OpenStack-Helm/Ceph"
msgid "Running Tests"
msgstr "Menjalankan Tes"
msgid "Setup:"
msgstr "Mempersiapkan:"
msgid ""
"Showing partial output from kubectl describe command to show which image is "
"Docker container is using"
msgstr ""
"Menampilkan sebagian output dari kubectl menggambarkan perintah untuk "
"menunjukkan image mana yang digunakan oleh container Docker"
msgid "Software Failure"
msgstr "Kegagalan Perangkat Lunak"
msgid "Solution:"
msgstr "Solusi:"
msgid "Start a new OSD pod on ``voyager4``:"
msgstr "Mulai pod LED baru pada ``voyager 4``:"
msgid "Steps:"
msgstr "Langkah:"
msgid "Symptom:"
msgstr "Gejala:"
msgid "Test Environment"
msgstr "Uji Lingkungan"
msgid "Test Scenario:"
msgstr "Test Scenario:"
msgid "Testing"
msgstr "Pengujian"
msgid "Testing Expectations"
msgstr "Menguji Ekspektasi"
msgid ""
"The goal of our resiliency tests for `OpenStack-Helm/Ceph <https://github."
"com/openstack/openstack-helm/tree/master/ceph>`_ is to show symptoms of "
"software/hardware failure and provide the solutions."
msgstr ""
"Tujuan dari uji ketahanan kami untuk `OpenStack-Helm/Ceph <https://github."
"com/openstack/openstack-helm/tree/master/ceph>`_ adalah untuk menunjukkan "
"gejala kegagalan perangkat lunak/perangkat keras dan memberikan solusi."
msgid ""
"The logs of the failed mon-pod shows the ceph-mon process cannot run as ``/"
"var/lib/ceph/mon/ceph-voyager3/store.db`` does not exist."
msgstr ""
"Log dari mon-pod gagal menunjukkan proses ceph-mon tidak dapat berjalan "
"karena ``/var/lib/ceph/mon/ceph-voyager3/store.db`` tidak ada."
msgid ""
"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
"again. Also, Ceph pods are restarted automatically. Ceph status shows that "
"the monitor running on ``voyager3`` is now in quorum."
msgstr ""
"Status node ``voyager3`` berubah menjadi ``Ready`` setelah node naik lagi. "
"Juga, Ceph pod di-restart secara otomatis. Status Ceph menunjukkan bahwa "
"monitor yang dijalankan pada ``voyager3`` sekarang dalam kuorum."
msgid ""
"The node status of ``voyager3`` changes to ``Ready`` after the node is up "
"again. Also, Ceph pods are restarted automatically. The Ceph status shows "
"that the monitor running on ``voyager3`` is now in quorum and 6 osds gets "
"back up (i.e., a total of 24 osds are up)."
msgstr ""
"Status node ``voyager3`` berubah menjadi ``Ready`` setelah node naik lagi. "
"Juga, Ceph pod di-restart secara otomatis. Status Ceph menunjukkan bahwa "
"monitor yang berjalan pada ``voyager3`` sekarang berada di kuorum dan 6 osds "
"akan kembali (yaitu, total 24 osds naik)."
msgid ""
"The output of the Helm tests can be seen by looking at the logs of the pod "
"created by the Helm tests. These logs can be viewed with:"
msgstr ""
"Output dari tes Helm dapat dilihat dengan melihat log dari pod yang dibuat "
"oleh tes Helm. Log ini dapat dilihat dengan:"
msgid "The pod status of ceph-mon and ceph-osd shows as ``NodeLost``."
msgstr "Status pod ceph-mon dan ceph-osd ditampilkan sebagai ``NodeLost``."
msgid ""
"The status of the pods (where the three Monitor processes are killed) "
"changed as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> "
"``Running`` and this recovery process takes about 1 minute."
msgstr ""
"Status pod (di mana ketiga proses Monitor dimatikan) diubah sebagai berikut: "
"``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` dan proses "
"pemulihan ini memakan waktu sekitar 1 menit."
msgid ""
"The status of the pods (where the two Monitor processes are killed) changed "
"as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` "
"and this recovery process takes about 1 minute."
msgstr ""
"Status pod (di mana kedua proses Monitor mati) diubah sebagai berikut: "
"``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` dan proses "
"pemulihan ini memakan waktu sekitar 1 menit."
msgid ""
"This guide documents steps showing Ceph version upgrade. The main goal of "
"this document is to demostrate Ceph chart update without downtime for OSH "
"components."
msgstr ""
"Panduan ini mendokumentasikan langkah-langkah yang menunjukkan upgrade versi "
"Ceph. Tujuan utama dari dokumen ini adalah untuk mendemonstrasikan pembaruan "
"Ceph chart tanpa downtime untuk komponen OSH."
msgid ""
"This is for the case when a host machine (where ceph-mon is running) is down."
msgstr ""
"Ini untuk kasus ketika mesin host (di mana ceph-mon sedang berjalan) sedang "
"mati."
msgid "This is to test a scenario when 1 out of 3 Monitor processes is down."
msgstr "Ini untuk menguji skenario ketika 1 dari 3 proses Monitor mati."
msgid ""
"This is to test a scenario when 2 out of 3 Monitor processes are down. To "
"bring down 2 Monitor processes (out of 3), we identify two Monitor processes "
"and kill them from the 2 monitor hosts (not a pod)."
msgstr ""
"Ini untuk menguji skenario ketika 2 dari 3 proses Monitor sedang down. Untuk "
"menurunkan 2 proses Monitor (dari 3), kami mengidentifikasi dua proses "
"Monitor dan mematikannya dari 2 monitor host (bukan pod)."
msgid ""
"This is to test a scenario when 3 out of 3 Monitor processes are down. To "
"bring down 3 Monitor processes (out of 3), we identify all 3 Monitor "
"processes and kill them from the 3 monitor hosts (not pods)."
msgstr ""
"Ini untuk menguji skenario ketika 3 dari 3 proses Monitor sedang down. Untuk "
"menurunkan 3 proses Monitor (dari 3), kami mengidentifikasi semua 3 proses "
"Monitor dan mematikannya dari 3 monitor host (bukan pod)."
msgid ""
"This is to test a scenario when a disk failure happens. We monitor the ceph "
"status and notice one OSD (osd.2) on voyager4 which has ``/dev/sdh`` as a "
"backend is down."
msgstr ""
"Ini untuk menguji skenario ketika terjadi kegagalan disk. Kami memonitor "
"status ceph dan melihat satu OSD (osd.2) di voyager4 yang memiliki ``/dev/"
"sdh`` sebagai backend sedang down (mati)."
msgid ""
"This is to test a scenario when an OSD pod is deleted by ``kubectl delete "
"$OSD_POD_NAME``. Meanwhile, we monitor the status of Ceph and noted that it "
"takes about 90 seconds for the OSD running in deleted pod to recover from "
"``down`` to ``up``."
msgstr ""
"Ini untuk menguji skenario ketika pod OSD dihapus oleh ``kubectl delete "
"$OSD_POD_NAME``. Sementara itu, kami memantau status Ceph dan mencatat bahwa "
"dibutuhkan sekitar 90 detik untuk OSD yang berjalan di pod yang dihapus "
"untuk memulihkan dari ``down`` ke ``up``."
msgid "This is to test a scenario when some of the OSDs are down."
msgstr "Ini untuk menguji skenario ketika beberapa OSD turun."
msgid ""
"To bring down 1 Monitor process (out of 3), we identify a Monitor process "
"and kill it from the monitor host (not a pod)."
msgstr ""
"Untuk menurunkan 1 proses Monitor (dari 3), kami mengidentifikasi proses "
"Monitor dan mematikannya dari host monitor (bukan pod)."
msgid ""
"To bring down 6 OSDs (out of 24), we identify the OSD processes and kill "
"them from a storage host (not a pod)."
msgstr ""
"Untuk menurunkan 6 OSD (dari 24), kami mengidentifikasi proses OSD dan "
"mematikannya dari host penyimpanan (bukan pod)."
msgid "To replace the failed OSD, excecute the following procedure:"
msgstr "Untuk mengganti OSD yang gagal, jalankan prosedur berikut:"
msgid "Update Ceph Client chart with new overrides:"
msgstr "Perbarui Ceph Client chart dengan override baru:"
msgid "Update Ceph Mon chart with new overrides"
msgstr "Perbarui Ceph Mon chart dengan override baru"
msgid "Update Ceph OSD chart with new overrides:"
msgstr "Perbarui Ceph OSD chart dengan override baru:"
msgid "Update Ceph Provisioners chart with new overrides:"
msgstr "Perbarui Ceph Provisioners chart dengan override baru:"
msgid ""
"Update ceph install script ``./tools/deployment/multinode/030-ceph.sh`` to "
"add ``images:`` section in overrides as shown below."
msgstr ""
"Perbarui ceph install script ``./tools/deployment/multinode/030-ceph.sh`` "
"untuk menambahkan bagian ``images:`` di override seperti yang ditunjukkan di "
"bawah ini."
msgid ""
"Update, image section in new overrides ``ceph-update.yaml`` as shown below"
msgstr ""
"Pembaruan, bagian image di overrides baru ``ceph-update.yaml`` seperti yang "
"ditunjukkan di bawah ini"
msgid "Upgrade Ceph charts to update version"
msgstr "Tingkatkan Ceph charts untuk memperbarui versi"
msgid ""
"Upgrade Ceph charts to version 12.2.5 by updating docker images in overrides."
msgstr ""
"Tingkatkan Ceph chart ke versi 12.2.5 dengan memperbarui image docker di "
"overrides."
msgid ""
"Upgrade Ceph component version from ``12.2.4`` to ``12.2.5`` without "
"downtime to OSH components."
msgstr ""
"Upgrade versi komponen Ceph dari ``12.2.4`` ke ``12.2.5`` tanpa waktu henti "
"ke komponen OSH."
msgid ""
"Use Ceph override file ``ceph.yaml`` that was generated previously and "
"update images section as below"
msgstr ""
"Gunakan Ceph override file ``ceph.yaml`` yang telah dibuat sebelumnya dan "
"perbarui bagian image seperti di bawah ini"
msgid ""
"Validate the Ceph status (i.e., one OSD is added, so the total number of "
"OSDs becomes 24):"
msgstr ""
"Validasi status Ceph (yaitu satu OSD ditambahkan, sehingga jumlah total OSD "
"menjadi 24):"
msgid ""
"We also monitored the pod status through ``kubectl get pods -n ceph`` during "
"this process. The deleted OSD pod status changed as follows: ``Terminating`` "
"-> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` -> ``Running``, and this "
"process taks about 90 seconds. The reason is that Kubernetes automatically "
"restarts OSD pods whenever they are deleted."
msgstr ""
"Kami juga memantau status pod melalui ``kubectl get pods -n ceph`` selama "
"proses ini. Status pod OSD yang dihapus diubah sebagai berikut: "
"``Terminating`` -> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` -> "
"``Running``, dan proses ini memakan waktu sekitar 90 detik. Alasannya adalah "
"Kubernetes secara otomatis merestart pod OSD setiap kali dihapus."
msgid ""
"We also monitored the status of the Monitor pod through ``kubectl get pods -"
"n ceph``, and the status of the pod (where a Monitor process is killed) "
"changed as follows: ``Running`` -> ``Error`` -> ``Running`` and this "
"recovery process takes about 24 seconds."
msgstr ""
"Kami juga memantau status pod Monitor melalui ``kubectl get pods -n ceph``, "
"dan status pod (di mana proses Monitor mati) berubah sebagai berikut: "
"``Running`` -> ``Error`` -> ``Running`` dan proses pemulihan ini membutuhkan "
"waktu sekitar 24 detik."
msgid ""
"We have 3 Monitors in this Ceph cluster, one on each of the 3 Monitor hosts."
msgstr ""
"Kami memiliki 3 Monitor di cluster Ceph ini, satu di masing-masing dari 3 "
"host Monitor."
msgid ""
"We intentionlly destroy a Monitor database by removing ``/var/lib/openstack-"
"helm/ceph/mon/mon/ceph-voyager3/store.db``."
msgstr ""
"Kami bermaksud menghancurkan database Monitor dengan menghapus ``/var/lib/"
"openstack-helm/ceph/mon/mon/ceph-voyager3/store.db``."
msgid ""
"We monitored the status of Ceph Monitor pods and noted that the symptoms are "
"similar to when 1 or 2 Monitor processes are killed:"
msgstr ""
"Kami memantau status pod Ceph Monitor dan mencatat bahwa gejalanya mirip "
"dengan ketika 1 atau 2 proses Monitor dimatikan:"
msgid ""
"We monitored the status of Ceph when the Monitor processes are killed and "
"noted that the symptoms are similar to when 1 Monitor process is killed:"
msgstr ""
"Kami memantau status Ceph ketika proses Monitor dimatikan dan mencatat bahwa "
"gejala mirip dengan ketika 1 Proses monitor dimatikan:"
msgid "`Disk failure <./disk-failure.html>`_"
msgstr "`Disk failure <./disk-failure.html>`_"
msgid "`Host failure <./host-failure.html>`_"
msgstr "`Host failure <./host-failure.html>`_"
msgid "`Monitor failure <./monitor-failure.html>`_"
msgstr "`Monitor failure <./monitor-failure.html>`_"
msgid "`OSD failure <./osd-failure.html>`_"
msgstr "`OSD failure <./osd-failure.html>`_"
msgid ""
"``Results:`` All provisioner pods got terminated at once (same time). Other "
"ceph pods are running. No interruption to OSH pods."
msgstr ""
"``Results:`` Semua pod penyedia dihentikan sekaligus (saat yang sama). Ceph "
"pod lainnya sedang berjalan. Tidak ada gangguan pada pod OSH."
msgid ""
"``Results:`` Mon pods got updated one by one (rolling updates). Each Mon pod "
"got respawn and was in 1/1 running state before next Mon pod got updated. "
"Each Mon pod got restarted. Other ceph pods were not affected with this "
"update. No interruption to OSH pods."
msgstr ""
"``Results:`` Mon pod mendapat pembaruan satu per satu (pembaruan bergulir). "
"Setiap Mon pod mendapat respawn dan berada dalam 1/1 keadaan sebelum Mon pod "
"berikutnya diperbarui. Setiap Mon pod mulai dihidupkan ulang. Ceph pod "
"lainnya tidak terpengaruh dengan pembaruan ini. Tidak ada gangguan pada pod "
"OSH."
msgid ""
"``Results:`` Rolling updates (one pod at a time). Other ceph pods are "
"running. No interruption to OSH pods."
msgstr ""
"``Results:`` Bergulir pembaruan (satu pod dalam satu waktu). Ceph pod "
"lainnya sedang berjalan. Tidak ada gangguan pada pod OSH."
msgid ""
"``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` images are "
"used for jobs. ``ceph_mon_check`` has one script that is stable so no need "
"to upgrade."
msgstr ""
"Image ``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` "
"digunakan untuk pekerjaan. ``ceph_mon_check`` memiliki satu skrip yang "
"stabil sehingga tidak perlu melakukan upgrade."
msgid "``cp /tmp/ceph.yaml ceph-update.yaml``"
msgstr "``cp /tmp/ceph.yaml ceph-update.yaml``"
msgid "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``"
msgid "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``"
msgid "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``"
msgstr "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``"
msgid ""
"``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update."
"yaml``"
msgstr ""
"``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update."
"yaml``"
msgid "``series of console outputs:``"
msgstr "``series of console outputs:``"

File diff suppressed because it is too large


@@ -0,0 +1,209 @@
# Soonyeul Park <ardentpark@gmail.com>, 2018. #zanata
msgid ""
msgstr ""
"Project-Id-Version: openstack-helm\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2018-09-29 05:49+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-09-29 01:07+0000\n"
"Last-Translator: Soonyeul Park <ardentpark@gmail.com>\n"
"Language-Team: Korean (South Korea)\n"
"Language: ko_KR\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=1; plural=0\n"
msgid "Backing up a PVC"
msgstr "PVC 백업"
msgid ""
"Backing up a PVC stored in Ceph, is fairly straigthforward, in this example "
"we use the PVC ``mysql-data-mariadb-server-0`` as an example, but this will "
"also apply to any other services using PVCs eg. RabbitMQ, Postgres."
msgstr ""
"Ceph에 저장된 PVC를 백업하는 것은 아주 간단합니다. 본 예시에서 PVC의 ``mysql-"
"data-mariadb-server-0`` 을 예로 들겠지만, 이는 또한 RabbitMQ, Postgres 와 같"
"은 다른 서비스에도 적용 가능할 것입니다."
msgid ""
"Before proceeding, it is important to ensure that you have deployed a client "
"key in the namespace you wish to fulfill ``PersistentVolumeClaims``. To "
"verify that your deployment namespace has a client key:"
msgstr ""
"진행하기 전에, ``PersistentVolumeClaims`` 를 만족하기를 원하는 네임스페이스"
"에 클라이언트 키를 배포했는지 보장하는 것이 중요합니다. 네임스페이스가 클라이"
"언트 키를 가지고 있는지에 대한 검증은 다음과 같습니다:"
msgid "Bugs and Feature requests"
msgstr "버그와 기능 요청"
msgid "Ceph"
msgstr "Ceph"
msgid "Ceph Deployment Status"
msgstr "Ceph 배포 상태"
msgid "Ceph Validating PVC Operation"
msgstr "Ceph의 PVC 작업 검증"
msgid "Ceph Validating StorageClass"
msgstr "Ceph의 StorageClass 검증"
msgid "Channels"
msgstr "채널"
msgid "Database Deployments"
msgstr "데이터베이스 배포"
msgid ""
"First, we want to validate that Ceph is working correctly. This can be done "
"with the following Ceph command:"
msgstr ""
"먼저, Ceph가 정확하게 작동하고 있는지 확인하고 싶습니다. 이는 다음과 같은 "
"Ceph 명령을 통해 수행될 수 있습니다:"
msgid "Galera Cluster"
msgstr "Galera 클러스터"
msgid "Getting help"
msgstr "도움말"
msgid "Installation"
msgstr "설치"
msgid ""
"Join us on `IRC <irc://chat.freenode.net:6697/openstack-helm>`_: #openstack-"
"helm on freenode"
msgstr ""
"`IRC <irc://chat.freenode.net:6697/openstack-helm>`_: freenode 내 "
"#openstack-helm 에 가입합니다."
msgid "Join us on `Slack <http://slack.k8s.io/>`_ - #openstack-helm"
msgstr "`Slack <http://slack.k8s.io/>`_ - #openstack-helm 에 가입합니다."
msgid ""
"Next we can look at the storage class, to make sure that it was created "
"correctly:"
msgstr ""
"다음으로 올바르게 생성되었는지를 확인하기 위해, 저장소 클래스를 살펴볼 수 있"
"습니다:"
msgid ""
"Note: This step is not relevant for PVCs within the same namespace Ceph was "
"deployed."
msgstr ""
"참고: 이 단계는 Ceph가 배포된 동일한 네임스페이스 내의 PVC와는 관련이 없습니"
"다."
msgid "Once this has been done the workload can be restarted."
msgstr "이 작업이 완료되면 워크로드를 재시작할 수 있습니다."
msgid "PVC Preliminary Validation"
msgstr "PVC 사전 검증"
msgid "Persistent Storage"
msgstr "Persistent 스토리지"
msgid ""
"Restoring is just as straightforward. Once the workload consuming the device "
"has been stopped, and the raw RBD device removed the following will import "
"the back up and create a device:"
msgstr ""
"복구도 마찬가지로 간단합니다. 장치를 소비하는 워크로드가 멈추고 raw RBD 장치"
"가 제거되면, 다음과 같이 백업을 가져와 장치를 생성할 것입니다:"
msgid ""
"Sometimes things go wrong. These guides will help you solve many common "
"issues with the following:"
msgstr ""
"가끔 이상이 발생할 때, 이 안내서가 다음과 같은 여러가지 일반적인 문제들을 해"
"결하는 데 도움이 될 것입니다."
msgid ""
"The parameters are what we're looking for here. If we see parameters passed "
"to the StorageClass correctly, we will see the ``ceph-mon.ceph.svc.cluster."
"local:6789`` hostname/port, things like ``userid``, and appropriate secrets "
"used for volume claims."
msgstr ""
"여기서 찾고있는 것은 매개 변수입니다. 매개 변수가 StorageClass를 올바르게 통"
"과했는지 확인했다면, ``userid``, 그리고 볼륨 클레임을 위해 사용되는 적절한 비"
"밀과 같은 ``ceph-mon.ceph.svc.cluster.local:6789`` hostname/port 를 확인할 "
"수 있습니다."
msgid ""
"This guide is to help users debug any general storage issues when deploying "
"Charts in this repository."
msgstr ""
"이 안내서는 사용자가 저장소에 차트를 배포할 때의 일반적인 저장소 문제들을 디"
"버그하는 것을 돕기 위한 것입니다."
msgid ""
"This guide is to help users debug any general storage issues when deploying "
"charts in this repository."
msgstr ""
"이 안내서는 사용자가 저장소에 차트를 배포할 때의 일반적인 저장소 문제들을 디"
"버그하는 것을 돕기 위한 것입니다."
msgid ""
"To deploy the HWE kernel, prior to deploying Kubernetes and OpenStack-Helm "
"the following commands should be run on each node:"
msgstr ""
"HWE 커널을 배포하려면, Kubernetes와 OpenStack-Helm을 배포하기 전에 각 노드에"
"서 다음과 같은 명령을 실행해야 합니다."
msgid ""
"To make use of CephFS in Ubuntu the HWE Kernel is required, until the issue "
"described `here <https://github.com/kubernetes-incubator/external-storage/"
"issues/345>`_ is fixed."
msgstr ""
"Ubuntu에서 CephFS를 사용하려면, `여기 <https://github.com/kubernetes-"
"incubator/external-storage/issues/345>`_ 에 기술된 문제가 해결될 때까지 HWE "
"커널이 필요합니다."
msgid "To test MariaDB, do the following:"
msgstr "MariaDB를 테스트하려면, 다음과 같이 수행하십시오:"
msgid ""
"To validate persistent volume claim (PVC) creation, we've placed a test "
"manifest `here <https://raw.githubusercontent.com/openstack/openstack-helm/"
"master/tests/pvc-test.yaml>`_. Deploy this manifest and verify the job "
"completes successfully."
msgstr ""
"persistent volume claim (PVC) 생성을 검증하기 위해서, `여기 <https://raw."
"githubusercontent.com/openstack/openstack-helm/master/tests/pvc-test.yaml>`_ "
"에 테스트 매니페스트를 배치하였습니다. 이 매니페스트를 배포하고 작업이 성공적"
"으로 완료되었는지 확인합니다."
msgid "Troubleshooting"
msgstr "트러블슈팅"
msgid "Ubuntu HWE Kernel"
msgstr "Ubuntu HWE 커널"
msgid ""
"Use one of your Ceph Monitors to check the status of the cluster. A couple "
"of things to note above; our health is `HEALTH\\_OK`, we have 3 mons, we've "
"established a quorum, and we can see that all of our OSDs are up and in the "
"OSD map."
msgstr ""
"Ceph 모니터 중 하나를 사용하여 클러스터의 상태를 점검하십시오. 위의 몇 가지 "
"사항을 참고했을 때; health는 `HEALTH\\_OK`, mon은 3개로 quorum을 설정하였고, "
"모든 OSD가 up되어 OSD 맵 안에 있음을 확인할 수 있습니다."
msgid ""
"When discovering a new bug, please report a bug in `Storyboard <https://"
"storyboard.openstack.org/#!/project/886>`_."
msgstr ""
"새로운 버그 발견 시, `Storyboard <https://storyboard.openstack.org/#!/"
"project/886>`_ 로 보고 부탁드립니다."
msgid ""
"Without this, your RBD-backed PVCs will never reach the ``Bound`` state. "
"For more information, see how to `activate namespace for ceph <../install/"
"multinode.html#activating-control-plane-namespace-for-ceph>`_."
msgstr ""
"이것 없이는, RBD-backed PVC 는 절대로 ``Bound`` 상태가 될 수 없을 것입니다. "
"더 많은 정보를 위해서는, 어떻게 `ceph를 위한 네임스페이스를 활성화 <../"
"install/multinode.html#activating-control-plane-namespace-for-ceph>`_ 하는지 "
"보십시오."