parser job from logsearch/207.0.0
Github source: 70a733ee or master branch
Properties¶
logstash¶
heap_size¶Sets the JVM heap size
log_level¶The default logging level (e.g. WARN, DEBUG, INFO)
- Default
  info
metadata_level¶Whether to include additional metadata throughout the event lifecycle. NONE = disabled, DEBUG = fully enabled
- Default
  NONE
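For illustration, these logstash properties could be set in a deployment manifest along the following lines (the values shown are assumptions, not release defaults):

```yaml
properties:
  logstash:
    heap_size: 1G        # hypothetical value; sets the JVM heap size
    log_level: DEBUG     # default logging level
    metadata_level: NONE # NONE disables extra event-lifecycle metadata
```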
logstash_parser¶
debug¶Debug level logging
- Default
  false
deployment_dictionary¶A list of files concatenated into one deployment dictionary file. Each file contains a hash of job-name/deployment-name key pairs for the @source.deployment lookup.
- Default
  - /var/vcap/packages/logsearch-config/deployment_lookup.yml
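Each file listed here is a YAML hash mapping job names to deployment names; a dictionary file might look roughly like this (the entries below are hypothetical, not shipped defaults):

```yaml
# e.g. /var/vcap/packages/logsearch-config/deployment_lookup.yml
# (hypothetical entries mapping job name -> deployment name)
ingestor_syslog: logsearch
parser: logsearch
router: cf
```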
elasticsearch¶
data_hosts¶The list of elasticsearch data node IPs
- Default
  - 127.0.0.1
document_id¶Use a specific, dynamic ID rather than an auto-generated identifier.
flush_size¶Controls how many logs will be buffered and sent to Elasticsearch for bulk indexing
- Default
  500
idle_flush_time¶How frequently to flush events if the output queue is not full.
index¶The specific, dynamic index name to write events to.
- Default
  logstash-%{+YYYY.MM.dd}
index_type¶The specific, dynamic index type name to write events to.
- Default
  '%{@type}'
routing¶The routing to be used when indexing a document.
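Taken together, the elasticsearch sub-properties above could be set in a manifest like this (hosts and values here are hypothetical; the index and index_type shown match the documented defaults):

```yaml
properties:
  logstash_parser:
    elasticsearch:
      data_hosts: [10.0.0.10, 10.0.0.11]  # hypothetical data node IPs
      index: "logstash-%{+YYYY.MM.dd}"    # dynamic index name
      index_type: "%{@type}"              # dynamic index type
      flush_size: 500                     # bulk-indexing buffer size
```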
enable_json_filter¶Toggles the if_it_looks_like_json.conf filter rule
- Default
  false
filters¶The configuration to embed into the logstash filters section. Can either be a set of parsing rules as a string or a list of hashes in the form of [{name: path_to_parsing_rules.conf}]
- Default
  ""
inputs¶A list of input plugins, with a hash of options for each of them. Please refer to example below.
- Default
  - options: {}
    plugin: redis
- Example
  inputs:
  - options:
      host: 192.168.1.1
      password: c1oudbunny
      user: logsearch
    plugin: rabbitmq
message_max_size¶Maximum log message length. Anything larger is truncated (TODO: move this to ingestor?)
- Default
  1048576
outputs¶A list of output plugins, with a hash of options for each of them. Please refer to example below.
- Default
  - options: {}
    plugin: elasticsearch
- Example
  outputs:
  - options:
      collection: logs
      database: logsearch
      uri: 192.168.1.1
    plugin: mongodb
timecop¶
reject_greater_than_hours¶Logs with timestamps greater than this many hours in the future won’t be parsed and will get tagged with fail/timecop
- Default
  1
reject_less_than_hours¶Logs with timestamps less than this many hours in the past won’t be parsed and will get tagged with fail/timecop
- Default
  24
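With the defaults above, an event stamped more than 1 hour in the future or more than 24 hours in the past is left unparsed and tagged fail/timecop. Widening that window might look like the following (the values are hypothetical):

```yaml
logstash_parser:
  timecop:
    reject_greater_than_hours: 2   # hypothetical: tolerate up to 2h of clock skew
    reject_less_than_hours: 168    # hypothetical: accept logs up to a week old
```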
wait_for_templates¶A list of index templates that need to be present in Elasticsearch before the process starts
- Default
  - index_template
workers¶The number of worker threads that logstash should use (default: auto = one per CPU)
- Default
  auto
redis¶
host¶Redis host of queue
key¶Name of queue to pull messages from
- Default
  logstash
port¶Redis port of queue
- Default
  6379
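The redis properties above point the parser at the queue it pulls from; a manifest snippet might look like this (the host is hypothetical; key and port shown are the documented defaults):

```yaml
properties:
  redis:
    host: 10.0.0.5   # hypothetical Redis queue host
    key: logstash    # queue to pull messages from
    port: 6379
```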
Templates¶
Templates are rendered and placed onto corresponding instances during the deployment process. This job's templates will be placed into the /var/vcap/jobs/parser/ directory.
bin/monit_debugger (from bin/monit_debugger)
bin/parser_ctl (from bin/parser_ctl)
config/filters_override.conf (from config/filters_override.conf.erb)
config/filters_post.conf (from config/filters_post.conf.erb)
config/filters_pre.conf (from config/filters_pre.conf.erb)
config/input_and_output.conf (from config/input_and_output.conf.erb)
config/log4j2.properties (from config/log4j2.properties.erb)
data/properties.sh (from data/properties.sh.erb)
helpers/ctl_setup.sh (from helpers/ctl_setup.sh)
helpers/ctl_utils.sh (from helpers/ctl_utils.sh)
logsearch/logs.yml (from logsearch/logs.yml)
Packages¶
Packages are compiled and placed onto corresponding instances during the deployment process. Packages will be placed into the /var/vcap/packages/ directory.