dea_next job from cf/227
The DEA is responsible for running customer applications and keeping their associated routes live. It also periodically advertises its capacity to the Cloud Controllers and accepts requests to start and stop applications.
Github source: 07527d79 or master branch
Properties

cc
    internal_api_password
        Password to access internal endpoints
    internal_api_user
        User name to access internal endpoints
        Default: internal_user
    srv_api_uri
        API URI of the cloud controller
dea_next
    advertise_interval_in_seconds
        Frequency of staging and DEA advertisements, in seconds
        Default: 5
    allow_host_access
        Allows Warden containers to access the DEA host via its IP
        Default: false
    allow_networks
    crash_lifetime_secs
        Crashed app lifetime, in seconds
        Default: 3600
    default_health_check_timeout
        Default timeout for an application to start
        Default: 60
    deny_networks
    directory_server_protocol
        The protocol to use when communicating with the directory server ("http" or "https")
        Default: https
    disk_mb
        Default: 32000
    disk_overcommit_factor
        Default: 1
    evacuation_bail_out_time_in_seconds
        Duration to wait before shutting down, in seconds
        Default: 115
    heartbeat_interval_in_seconds
        Frequency of heartbeats, in seconds
        Default: 10
    instance_bandwidth_limit
        burst
            Network bandwidth burst limit for running instances, in bytes
        rate
            Network bandwidth limit for running instances, in bytes per second
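As a sketch of how these nested properties are set, here is a manifest excerpt; the limit values are purely illustrative, not defaults from this spec:

    # Illustrative only: cap each app instance at ~1 MB/s,
    # with a 5 MB burst allowance.
    properties:
      dea_next:
        instance_bandwidth_limit:
          rate: 1000000   # bytes per second
          burst: 5000000  # bytes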
    instance_disk_inode_limit
        Limit on inodes for an instance container
        Default: 200000
    instance_max_cpu_share_limit
        The maximum number of CPU shares that can be given to an app
        Default: 256
    instance_memory_to_cpu_share_ratio
        Controls the relationship between app memory and CPU shares: app_cpu_shares = app_memory / cpu_share_factor
        Default: 8
    instance_min_cpu_share_limit
        The minimum number of CPU shares that can be given to an app
        Default: 1
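To make the share formula concrete with the defaults above: a 1024 MB app receives 1024 / 8 = 128 CPU shares, which falls inside the [1, 256] clamp, while a 4096 MB app computes 512 and is capped at the maximum of 256.

    # Defaults from this spec; the arithmetic in the comments applies
    # the formula app_cpu_shares = app_memory / cpu_share_factor.
    properties:
      dea_next:
        instance_memory_to_cpu_share_ratio: 8   # 1024 MB app -> 128 shares
        instance_min_cpu_share_limit: 1         # floor of the clamp
        instance_max_cpu_share_limit: 256       # 4096 MB app caps here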
    kernel_network_tuning_enabled
        Whether the DEA applies kernel network tunings. Recent kernel versions do not allow kernel network tuning inside Warden CPI containers, so this must be disabled there.
        Default: true
    logging_level
        Log level for the DEA
        Default: debug
    max_staging_duration
        Default: 900
    memory_mb
        Default: 8000
    memory_overcommit_factor
        Default: 1
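The spec leaves the overcommit semantics undescribed; assuming the conventional behavior that the DEA advertises memory_mb multiplied by memory_overcommit_factor (and likewise for disk) to the Cloud Controllers, a manifest could trade density for safety like this:

    # Assumption: advertised capacity = physical capacity x overcommit factor.
    # Here 8000 MB of memory is advertised as 16000 MB, betting that most
    # apps will not use their full reservation; disk is not overcommitted.
    properties:
      dea_next:
        memory_mb: 8000
        memory_overcommit_factor: 2
        disk_mb: 32000
        disk_overcommit_factor: 1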
    mtu
        Interface MTU size
        Default: 1500
    rlimit_core
        Maximum size of a core file, in bytes. 0 means no core dump files can be created; -1 means no size limit.
        Default: 0
    stacks
        An array of stacks, each specifying a name and package path
        Default:
            - name: cflinuxfs2
              package_path: /var/vcap/packages/rootfs_cflinuxfs2/rootfs
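Since the default is easy to misread when flattened, note that stacks is a YAML array of name/package_path pairs. In a manifest, the default looks like:

    properties:
      dea_next:
        stacks:
          - name: cflinuxfs2
            package_path: /var/vcap/packages/rootfs_cflinuxfs2/rootfs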
    staging_bandwidth_limit
        burst
            Network bandwidth burst limit for staging tasks, in bytes
        rate
            Network bandwidth limit for staging tasks, in bytes per second
    staging_cpu_limit_shares
        CPU limit in shares for the staging tasks cgroup
        Default: 512
    staging_disk_inode_limit
        Limit on inodes for a staging container
        Default: 200000
    staging_disk_limit_mb
        Disk limit in MB for staging tasks
        Default: 6144
    staging_memory_limit_mb
        Memory limit in MB for staging tasks
        Default: 1024
    streaming_timeout
        Default: 60
    zone
        The availability zone
        Default: default
disk_quota_enabled
    Whether disk quotas are enabled. Disk quota must be disabled to use warden-inside-warden with the Warden CPI.
    Default: true
domain
    DNS domain name for this Cloud Foundry deployment
metron_endpoint
    host
        The host used to emit messages to the Metron agent
        Default: 127.0.0.1
    port
        The port used to emit messages to the Metron agent
        Default: 3457
nats
    machines
        IP address of each NATS cluster member
    password
        Password for NATS login
    port
        TCP port of the NATS server
    user
        User name for NATS login
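A sketch of the messaging-related blocks in a manifest; the NATS addresses and credentials are illustrative, and 4222 is assumed here only because it is the conventional NATS port (the spec declares no default):

    properties:
      nats:
        machines: [10.0.0.10, 10.0.0.11]  # illustrative cluster members
        port: 4222                        # conventional NATS port (assumed)
        user: nats_user                   # illustrative credentials
        password: example-password
      metron_endpoint:
        host: 127.0.0.1  # default from this spec
        port: 3457       # default from this spec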
router
    requested_route_registration_interval_in_seconds
        Interval at which the router requests routes to be registered
        Default: 20
Templates

Templates are rendered and placed onto corresponding instances during the deployment process. This job's templates will be placed into the /var/vcap/jobs/dea_next/ directory.

    bin/dea_ctl (from dea_ctl.erb)
    bin/dir_server_ctl (from dir_server_ctl.erb)
    bin/drain (from deterministic_drain.rb)
    bin/warden_ctl (from warden_ctl.erb)
    config/dea.yml (from dea.yml.erb)
    config/warden.yml (from warden.yml.erb)
Packages

Packages are compiled and placed onto corresponding instances during the deployment process. Packages will be placed into the /var/vcap/packages/ directory.