hm9000 job from cf/211
HM9000 periodically compares the expected list of running applications, as specified by the Cloud Controller, against the list of applications actually running, as reported by the DEAs, and tries to reconcile any differences it finds.
Github source: 48c88357 or master branch
Properties

cc
  bulk_api_password
    Password used to access the bulk_api; health_manager uses it to connect to the CC. Announced over NATS.
  bulk_api_user
    User used to access the bulk_api; health_manager uses it to connect to the CC. Announced over NATS.
    Default: bulk_api
  internal_api_password
    Password for the hm9000 API.
  internal_api_user
    Username for the hm9000 API.
    Default: internal_user
  srv_api_uri
dea_next
  heartbeat_interval_in_seconds
    Heartbeat interval for DEAs.
domain
  Domain where the cloud_controller will listen (api.domain); often the same as the system domain.
etcd
  machines
    IPs pointing to the etcd cluster.
hm9000
  desired_state_batch_size
    The batch size when fetching desired state information from the CC.
    Default: 5000
  fetcher_network_timeout_in_seconds
    Each API call to the CC must succeed within this timeout.
    Default: 30
  url
    URL that HM9000 will register with the gorouter.
nats
  machines
  password
  port
  user
networks
  apps
    HM9000 network information.
ssl
  skip_cert_verify
    When connecting over https, ignore bad SSL certificates.
    Default: false
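To illustrate how these properties fit together, the fragment below is a rough sketch of the relevant properties block in a deployment manifest. All values shown (domains, IPs, credentials, the network name cf1) are placeholders for illustration, not defaults from this release.

properties:
  domain: example.com                       # cloud_controller listens on api.example.com
  cc:
    bulk_api_user: bulk_api
    bulk_api_password: REPLACE_ME           # placeholder secret
    internal_api_user: internal_user
    internal_api_password: REPLACE_ME       # placeholder secret
    srv_api_uri: https://api.example.com
  dea_next:
    heartbeat_interval_in_seconds: 10       # illustrative interval
  etcd:
    machines: [10.0.16.20, 10.0.16.21]      # IPs of the etcd cluster
  hm9000:
    url: https://hm9000.example.com         # route registered with the gorouter
    desired_state_batch_size: 5000
    fetcher_network_timeout_in_seconds: 30
  nats:
    user: nats
    password: REPLACE_ME                    # placeholder secret
    port: 4222
    machines: [10.0.16.11]
  networks:
    apps: cf1                               # network hm9000 uses; placeholder name
  ssl:
    skip_cert_verify: false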
Templates

Templates are rendered and placed onto the corresponding instances during the deployment process. This job's templates are placed into the /var/vcap/jobs/hm9000/ directory. A manifest sketch showing how the job is attached to an instance follows the template list below.
bin/hm9000_analyzer_ctl (from hm9000_analyzer_ctl)
bin/hm9000_api_server_ctl (from hm9000_api_server_ctl)
bin/hm9000_evacuator_ctl (from hm9000_evacuator_ctl)
bin/hm9000_fetcher_ctl (from hm9000_fetcher_ctl)
bin/hm9000_listener_ctl (from hm9000_listener_ctl)
bin/hm9000_metrics_server_ctl (from hm9000_metrics_server_ctl)
bin/hm9000_sender_ctl (from hm9000_sender_ctl)
bin/hm9000_shredder_ctl (from hm9000_shredder_ctl)
config/hm9000.json (from hm9000.json.erb)
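For context, the sketch below shows roughly how the hm9000 job might be attached to an instance in a BOSH v1-style deployment manifest; during the deploy, the templates listed above are rendered into /var/vcap/jobs/hm9000/ on that instance. The instance, resource pool, and network names are illustrative assumptions, not values from this release.

jobs:
- name: hm9000_z1                  # illustrative instance name
  instances: 1
  resource_pool: small_z1          # illustrative resource pool
  networks:
  - name: cf1                      # illustrative network name
  templates:
  - name: hm9000                   # this job; its templates end up under /var/vcap/jobs/hm9000/
    release: cf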
Packages

Packages are compiled and placed onto the corresponding instances during the deployment process. Packages are placed into the /var/vcap/packages/ directory.