Question GIT - Grafana InfluxDB Telegraf - DS918+ monitoring dashboard (and more to come)

NAS: DS918+ (8GB RAM, 4x WD RED 4TB SHR); UPS: EATON Ellipse PRO 1200FR
Hello all,
I installed GIT (Grafana + InfluxDB + Telegraf) yesterday to build a dashboard showing the status of my NAS. It's now running with a dashboard I found online (ID 9961), which relies on SNMP and seems to work as it should. However, I don't know how to fully customize what Grafana shows me so I see exactly what I want: uptime, volume occupation, memory usage (used/free), UPS status (load, battery charge, battery runtime based on load...), container status, and so on.
I mostly followed @Rusty 's guide for the installation (with some adaptations, such as putting all containers on a macvlan), but now I'm stuck because I don't really know what to do next to make it show what I need/want to see.
I'm also not sure it was a good idea to put the telegraf container on the macvlan instead of the host network, or to skip setting pid to host.
Any help/guidance would be appreciated, since it's the first time I'm using something like GIT.

Thanks in advance
1591377267245.png
 
Bottom line: the idea now is to uncomment sections in your telegraf config file to start picking up information from your NAS, its hardware and its sensors.

So just go to a [[inputs.xxxxx]] section, remove the comment markers, and restart the telegraf container. After that, InfluxDB will start gathering the data, and you can then use it in your custom dashboard (or the one you downloaded from the repository).
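For illustration, here is what that looks like for one input section. This is a sketch, not taken from the thread; the `[[inputs.net]]` plugin and the `eth0` interface name are just examples:

```toml
# telegraf.conf -- before: the section ships commented out
# [[inputs.net]]

# after: uncommented, optionally narrowed down to specific interfaces
[[inputs.net]]
  ## list of interfaces to monitor; leave unset to collect all of them
  # interfaces = ["eth0"]
```

Then restart the container (e.g. `docker restart telegraf`, assuming your container is named `telegraf`) so the new config is picked up.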
 
Well, that's something I had already figured out by adding the SNMP collection part and pointing it to the Synology community string and OIDs (a template was provided with the dashboard, but I changed its content a bit to add descriptions).
The point is, I'm not sure how to build the queries and select the tables, fields and contents.

Last thing: what exactly is the pid=host parameter used for in the telegraf creation command?
 
Hello @Rusty , I checked your blog again and noticed, under your review of Grafana 7 (which happens to be the version I'm currently using), that your screen captures show dashboards monitoring a DS918+. That seems to be exactly what I want to see on my dashboard, and it also looks really great.
Would you mind sharing how you set it up, so I can have a dashboard as good-looking as yours?
That would be great, because so far I've only managed to tweak mine a little, and I can't get some of the info to show as clearly as you did.
Your dashboard shown on your blog :
grafana2.png

My dashboard :
grafana 1 2020-06-06 192756.png

grafana 2 2020-06-06 192840.png

grafana 3 2020-06-06 192911.png

Hope you can help. Thanks.
 
@Shoop here is a JSON paste of my 918 board


Just make a new dashboard from this JSON, then change the parameters in each panel to match your NAS name, LAN adapters and LAGs (if you have them). Unless you are also running a VDSM instance, you can remove the panel for that network section.
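If you prefer scripting over the Dashboards → Import UI, Grafana's HTTP API can push the JSON as well. A rough sketch follows; the URL and API key are placeholders (not from this thread), and the payload shape is the one expected by `POST /api/dashboards/db`:

```python
import json
import urllib.request

GRAFANA_URL = "http://NASIPADDRESS:3000"  # placeholder: match your Grafana port mapping
API_KEY = "<grafana-api-key>"             # placeholder: create one under Configuration > API Keys

def build_import_payload(dashboard: dict) -> dict:
    """Wrap a dashboard JSON in the body expected by POST /api/dashboards/db."""
    dashboard = dict(dashboard, id=None)  # clear the id so Grafana assigns a new one
    return {"dashboard": dashboard, "overwrite": False, "folderId": 0}

def import_dashboard(dashboard: dict) -> None:
    """Send the dashboard to Grafana; raises on a non-2xx response."""
    req = urllib.request.Request(
        f"{GRAFANA_URL}/api/dashboards/db",
        data=json.dumps(build_import_payload(dashboard)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    urllib.request.urlopen(req)
```

Clearing the `id` matters when importing someone else's board: a stale id can collide with a dashboard that already exists in your install.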
 
Thanks a lot @Rusty , I managed to import your dashboard and modified it a bit to suit my configuration, though there are some things I don't understand.
Here is the list of things still bothering me:
- I did not manage to get "HDD used %" to show anything other than Volume3 used %, but I still have some ideas, so I'm going to keep trying.
- The LAN metrics don't seem to be accurate. I have tried various things, but they still seem wrong; I sometimes get negative values for either in or out bytes (read from ifTable).
- I'm also not sure about the R/W Mbps metrics, but I'm still trying to figure them out.
- I still can't figure out how to show the UPS info, even though I added the lines to my telegraf.conf. I'm not sure everything is right, as the code creates table fields but never creates the table... when I try to create a panel to search for this info, none of the fields listed below shows up:
Code:
    #
    # For those with a UPS attached to their Synology (some UPSs do not report all values)
    #

    # UPS model as reported to the NAS 
    [[inputs.snmp.table.field]]
        name = "upsModel"
        oid = "SYNOLOGY-UPS-MIB::upsDeviceModel"

    # UPS status as reported to the NAS 
    [[inputs.snmp.table.field]]
        name = "upsStatus"
        oid = "SYNOLOGY-UPS-MIB::upsInfoStatus"

    # UPS load as reported to the NAS 
    [[inputs.snmp.table.field]]
        name = "upsLoad"
        oid = "SYNOLOGY-UPS-MIB::upsInfoLoadValue"
      
    # UPS real power value as reported to the NAS 
    [[inputs.snmp.table.field]]
        name = "upsRealPower"
        oid = "SYNOLOGY-UPS-MIB::upsInfoRealPowerValue"

    # UPS battery charge value as reported to the NAS 
    [[inputs.snmp.table.field]]
        name = "upsCharge"
        oid = "SYNOLOGY-UPS-MIB::upsBatteryChargeValue"

    # UPS battery runtime as reported to the NAS 
    [[inputs.snmp.table.field]]
        name = "upsRuntime"
        oid = "SYNOLOGY-UPS-MIB::upsBatteryRuntimeValue"

    # UPS Battery Charge Warning 
    [[inputs.snmp.table.field]]
        name = "upsWarning"
        oid = "SYNOLOGY-UPS-MIB::upsBatteryChargeWarning"
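A note on the negative in/out byte values mentioned above: ifTable's `ifInOctets`/`ifOutOctets` are 32-bit counters that wrap around at 2^32, so a raw difference between two samples can come out negative. InfluxQL's `non_negative_derivative()` exists for exactly this case. A small Python sketch of the idea (illustrative only):

```python
WRAP = 2 ** 32  # ifInOctets/ifOutOctets are 32-bit SNMP counters

def non_negative_delta(prev: int, curr: int) -> int:
    """Delta between two counter samples, correcting for a single wrap."""
    delta = curr - prev
    if delta < 0:          # the counter wrapped between the two samples
        delta += WRAP
    return delta

# Normal case: counter simply increased
assert non_negative_delta(1_000, 5_000) == 4_000
# Wrap case: counter rolled over past 2^32 between samples
assert non_negative_delta(WRAP - 100, 50) == 150
```

In a Grafana panel this typically means querying the counter with `non_negative_derivative()` (and multiplying by 8 if you want bits per second rather than bytes per second).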

Edit: added some more things that are not working.
 
Well well well...
I managed to build a working dashboard (see screenshots), but then I stumbled across what seems to be a full InfluxData solution: the TICK stack.
@Rusty , since you seem interested in this kind of graphing/monitoring/dashboard thing, I was wondering if you had already tried it?
1605209809935.png


1605209873175.png


1605209905026.png


1605209954660.png


2020-11-12 204001.png
 

Hi, @Rusty would you mind sharing what you've uncommented in your telegraf.conf for your 918+ please?
I am obviously getting it completely wrong as I can't seem to get any information displayed at all.
 
Hi, @Rusty would you mind sharing what you've uncommented in your telegraf.conf for your 918+ please?
I am obviously getting it completely wrong as I can't seem to get any information displayed at all.
Sorry for the late reply.

Here is what I have for Global settings:

Code:
# Global tags can be specified here in key="value" format.
[global_tags]
  # dc = "us-east-1" # will tag all metrics with dc=us-east-1
  # rack = "1a"
  ## Environment variables can be used as tags, and throughout the config file
  # user = "$USER"


# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## For failed writes, telegraf will cache metric_buffer_limit metrics for each
  ## output, and will flush this buffer on a successful write. Oldest metrics
  ## are dropped first when this buffer fills.
  ## This buffer only fills when writes fail to output plugin(s).
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. You shouldn't set this below
  ## interval. Maximum flush_interval will be flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Logging configuration:
  ## Run telegraf with debug log messages.
  debug = false
  ## Run telegraf in quiet mode (error log messages only).
  quiet = false
  ## Specify the log file name. The empty string means to log to stderr.
  logfile = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false

This is for the output plugin section:

Code:
###############################################################################
#                            OUTPUT PLUGINS                                   #
###############################################################################

# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
  ## The full HTTP or UDP URL for your InfluxDB instance.
  ##
  ## Multiple urls can be specified as part of the same cluster,
  ## this means that only ONE of the urls will be written to each interval.
  # urls = ["udp://127.0.0.1:8089"] # UDP endpoint example
  urls = ["http://NASIPADDRESS:8086"] # required
  ## The target database for metrics (telegraf will create it if not exists).
  database = "telegraf_DB_NAME" # required

  ## Name of existing retention policy to write to.  Empty string writes to
  ## the default retention policy.
  retention_policy = ""
  ## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
  write_consistency = "any"

  ## Write timeout (for the InfluxDB client), formatted as a string.
  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
  timeout = "5s"

Finally, under input plugins, I have this:

Code:
###############################################################################
#                            INPUT PLUGINS                                    #
###############################################################################

# Read metrics about cpu usage
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = false
  ## If true, compute and report the sum of all non-idle CPU states.
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.diskio]]
# Get kernel statistics from /proc/stat
[[inputs.kernel]]
  # no configuration
# Read metrics about memory usage
[[inputs.mem]]
  # no configuration
# Get the number of processes and group them by status
[[inputs.processes]]
  # no configuration
# Read metrics about swap memory usage
[[inputs.swap]]
  # no configuration
# Read metrics about system load & uptime
[[inputs.system]]
  # no configuration
[[inputs.docker]]
  ## Only collect metrics for these containers; collect all if empty
  container_names = []

  ## Containers to include and exclude. Globs accepted.
  ## Note that an empty array for both will include all containers
  container_name_include = []
  docker_label_include = []
[[inputs.net]]

That's it
 
When I try to view your dashboard I get essentially a blank dashboard.
When I try to look at the information in the database using Explore, I get a load of strings for the host.

Hmm..... Digging back through blackvoid to see if I can find your guide and where I've gone wrong.

Nope, all as per your guide (I think).
Code:
version: "3"
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    environment:
      - PUID=1041
      - PGID=100
      - TZ=Europe/London
      - INFLUXDB_DB=telegraf
      - INFLUXDB_USER=telegraf
      - INFLUXDB_ADMIN_ENABLED=true
      - INFLUXDB_ADMIN_USER=telegraf
      - INFLUXDB_ADMIN_PASSWORD=metricsmetricsmetricsmetrics
    ports:
      - 8083:8083
      - 8086:8086
    volumes:
      - /volume2/docker/influxdb/conf/influxdb.conf:/etc/influxdb/influxdb.conf:ro
      - /volume2/docker/influxdb/db:/var/lib/influxdb
    networks:
      - synology

  telegraf:
    image: telegraf
    container_name: telegraf
    restart: always
    extra_hosts:
     - "influxdb:192.168.0.150"
    ports:
     - 8092:8092
     - 8094:8094
     - 8125:8125
    pid: host
    environment:
     - HOST_PROC=/rootfs/proc
     - HOST_SYS=/rootfs/sys
     - HOST_ETC=/rootfs/etc
    volumes:
     - /volume2/docker/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
#     - /usr/share/snmp/mibs:/usr/share/snmp/mibs
     - /var/run/docker.sock:/var/run/docker.sock:ro
     - /sys:/rootfs/sys:ro
     - /proc:/rootfs/proc:ro
     - /etc:/rootfs/etc:ro
    networks:
      - synology

  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: always
    ports:
      - 3010:3000
    networks:
      - synology
    volumes:
      - /volume2/docker/grafana:/var/lib/grafana
    environment:
      - PID=1041
      - GID=100


networks:
  synology:
    external: true

telegraf.conf
Code:
[global_tags]

[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = ""
  omit_hostname = false
  
[[outputs.influxdb]]
  urls = ["http://192.168.0.150:8086"]
  database = "telegraf"
  skip_database_creation = true
  retention_policy = ""
  timeout = "5s"
  username = "telegraf"
  password = "metricsmetricsmetricsmetrics"
  
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

[[inputs.diskio]]

[[inputs.kernel]]

[[inputs.mem]]

[[inputs.processes]]

[[inputs.swap]]

[[inputs.system]]

[[inputs.docker]]
#  endpoint = "unix:///var/run/docker.sock"
  container_names = []
  container_name_include = []
  docker_label_include = []

[[inputs.net]]
  
[[inputs.net_response]]
   protocol = "tcp"
   ## when protocol = "tcp", address must be "host:port" (not a URL); 443 is just an example port
   address = "<some_host_to_monitor>:443"

    [[inputs.snmp]]
    # List of agents to poll
    agents = ["192.168.0.150:161"] # the IP address (and SNMP port) of your NAS
       
    # Polling interval
    interval = "60s"
    
    # Timeout for each SNMP query.
    timeout = "10s"
    
    # Number of retries to attempt within timeout.
    retries = 3
    
    # SNMP version
    version = 2
    
    # SNMP community string.
    community = "public"
    
    # The GETBULK max-repetitions parameter
    max_repetitions = 30
    
    # Measurement name
    name = "snmp.synology"
    
    #
    # Generic SNMP information
    #
    
    #  System name (hostname)
    [[inputs.snmp.field]]
        is_tag = true
        name = "sysName"
        oid = "RFC1213-MIB::sysName.0"
    
    #  System vendor OID
    [[inputs.snmp.field]]
        name = "sysObjectID"
        oid = "RFC1213-MIB::sysObjectID.0"
    
    #  System description
    [[inputs.snmp.field]]
        name = "sysDescr"
        oid = "RFC1213-MIB::sysDescr.0"
    
    #  System contact info
    [[inputs.snmp.field]]
        name = "sysContact"
        oid = "RFC1213-MIB::sysContact.0"
    
    #  System location info
    [[inputs.snmp.field]]
        name = "sysLocation"
        oid = "RFC1213-MIB::sysLocation.0"
    
    #  System uptime
    [[inputs.snmp.field]]
        name = "sysUpTime"
        oid = "RFC1213-MIB::sysUpTime.0"
    
    # System interface table (network)
    [[inputs.snmp.table]]
        oid = "IF-MIB::ifTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "IF-MIB::ifDescr"
    
    #
    # Synology Storage Specific
    #
    
    [[inputs.snmp.table]]
        oid = "SYNOLOGY-DISK-MIB::diskTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "SYNOLOGY-DISK-MIB::diskID"
    # Synology disk table
    
    [[inputs.snmp.table]]
        oid = "SYNOLOGY-RAID-MIB::raidTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "SYNOLOGY-RAID-MIB::raidName"
    # Synology RAID table
    
    [[inputs.snmp.table]]
        oid = "SYNOLOGY-SERVICES-MIB::serviceTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "SYNOLOGY-SERVICES-MIB::serviceName"
    # Synology services table (the tag must be a column such as serviceName, not the entry itself)
    
    [[inputs.snmp.table]]
        oid = "SYNOLOGY-SMART-MIB::diskSMARTTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "SYNOLOGY-SMART-MIB::diskSMARTInfoDevName"
    # Synology SMART table (tagged by device name)
    
    [[inputs.snmp.table]]
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIOTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "SYNOLOGY-STORAGEIO-MIB::storageIODevice"
    # Synology Storage I/O table (tagged by device, not by the table OID itself)
    
    [[inputs.snmp.table]]
        oid = "SYNOLOGY-SPACEIO-MIB::spaceIOTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "SYNOLOGY-SPACEIO-MIB::spaceIODevice"
    # Synology Space I/O table (tagged by device, not by the table OID itself)
    
    #
    # Synology NAS services
    #
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersCIFS"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "1"
    # CIFS users table (i.e. # of users connected via CIFS)
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersAFP"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "2"
    # AFP users table (i.e. # of users connected via AFP)
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersNFS"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "3"
    # NFS users table (i.e. # of users connected via NFS)
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersFTP"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "4"
    # FTP users table (i.e. # of users connected via FTP)
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersSFTP"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "5"
    # SFTP users table (i.e. # of users connected via SFTP)
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersHTTP"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "6"
    # HTTP users table (i.e. # of users connected via HTTP)
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersTELNET"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "7"
    # Telnet users table (i.e. # of users connected via Telnet)
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersSSH"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "8"
    # SSH users table (i.e. # of users connected via SSH)
    
    [[inputs.snmp.table.field]]
        name = "serviceUsersOTHER"
        oid = "SYNOLOGY-SERVICES-MIB::serviceUsers"
        oid_index_suffix = "9"
    # Other users table (i.e. users connected to a service not listed above)
    
    #
    # For those with a UPS attached to their Synology (some UPSs do not report all values)
    #
    
    [[inputs.snmp.table.field]]
        name = "upsStatus"
        oid = "SYNOLOGY-UPS-MIB::upsInfoStatus"
    # UPS status as reported to the NAS
    
    [[inputs.snmp.table.field]]
        name = "upsLoad"
        oid = "SYNOLOGY-UPS-MIB::upsInfoLoadValue"
    # UPS load as reported to the NAS
    
    [[inputs.snmp.table.field]]
        name = "upsCharge"
        oid = "SYNOLOGY-UPS-MIB::upsBatteryChargeValue"
    # UPS battery charge value as reported to the NAS
    
    [[inputs.snmp.table.field]]
        name = "upsWarning"
        oid = "SYNOLOGY-UPS-MIB::upsBatteryChargeWarning"
    # UPS Battery Charge Warning
    
    #
    # physical drive telemetry - modify to match your physical drive configuration
    #
    
    [[inputs.snmp.field]]
        name = "phyDisk1Name"
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIODevice.1"
    #  Disk 1 name
    [[inputs.snmp.field]]
        name = "phyDisk1storageIOLA"
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIOLA.1"
    #  load of disk 1 (%)
    
    [[inputs.snmp.field]]
        name = "phyDisk2Name"
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIODevice.2"
    #  Disk 2 name
    [[inputs.snmp.field]]
        name = "phyDisk2storageIOLA"
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIOLA.2"
    #  load of disk 2 (%)
    
    [[inputs.snmp.field]]
        name = "phyDisk3Name"
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIODevice.3"
    #  Disk 3 name
    [[inputs.snmp.field]]
        name = "phyDisk3storageIOLA"
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIOLA.3"
    #  load of disk 3 (%)
    
    [[inputs.snmp.field]]
        name = "phyDisk4Name"
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIODevice.4"
    #  Disk 4 name
     [[inputs.snmp.field]]
        name = "phyDisk4storageIOLA"
        oid = "SYNOLOGY-STORAGEIO-MIB::storageIOLA.4"
    #  load of disk 4 (%)
    
    #
    # Generic volume, CPU and memory telemetry
    #
    
    [[inputs.snmp.table]]
        oid = "HOST-RESOURCES-MIB::hrStorageTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "HOST-RESOURCES-MIB::hrStorageDescr"
    # System volume table
    
    [[inputs.snmp.field]]
        name = "ssCpuUser"
        oid = ".1.3.6.1.4.1.2021.11.9.0"
    # % of time Synology CPU is spending processing userland code
    
    [[inputs.snmp.field]]
        name = "ssCpuSystem"
        oid = ".1.3.6.1.4.1.2021.11.10.0"
    # % of time Synology CPU is spending processing system-level code
    
    [[inputs.snmp.field]]
        name = "ssCpuIdle"
        oid = ".1.3.6.1.4.1.2021.11.11.0"
    # % of time Synology CPU is idle
    
    [[inputs.snmp.table]]
        oid = "UCD-SNMP-MIB::laTable"
        [[inputs.snmp.table.field]]
            is_tag = true
            oid = "UCD-SNMP-MIB::laNames"
    # System load table
    
    [[inputs.snmp.field]]
        name = "memTotalSwap"
        oid = "UCD-SNMP-MIB::memTotalSwap.0"
    # Synology total swap memory
    
    [[inputs.snmp.field]]
        name = "memAvailSwap"
        oid = "UCD-SNMP-MIB::memAvailSwap.0"
    # Synology available swap memory
    
    [[inputs.snmp.field]]
        name = "memTotalReal"
        oid = "UCD-SNMP-MIB::memTotalReal.0"
    # Synology total real memory
    
    [[inputs.snmp.field]]
        name = "memAvailReal"
        oid = "UCD-SNMP-MIB::memAvailReal.0"
    # Synology available real memory
    
    [[inputs.snmp.field]]
        name = "memTotalFree"
        oid = "UCD-SNMP-MIB::memTotalFree.0"
    # Synology total free memory
    
    #
    # Synology-specific system telemetry
    #
    
    [[inputs.snmp.field]]
        name = "systemStatus"
        oid = "SYNOLOGY-SYSTEM-MIB::systemStatus.0"
    # Overall system status
    
    [[inputs.snmp.field]]
        name = "temperature"
        oid = "SYNOLOGY-SYSTEM-MIB::temperature.0"
    # Synology unit temperature (drive temps are in SYNOLOGY-DISK-MIB)
    
    [[inputs.snmp.field]]
        name = "powerStatus"
        oid = "SYNOLOGY-SYSTEM-MIB::powerStatus.0"
    # Synology power status
    
    [[inputs.snmp.field]]
        name = "systemFanStatus"
        oid = "SYNOLOGY-SYSTEM-MIB::systemFanStatus.0"
    # Synology fan status
    
    [[inputs.snmp.field]]
        name = "cpuFanStatus"
        oid = "SYNOLOGY-SYSTEM-MIB::cpuFanStatus.0"
    # status of Synology's CPU fan
    
    [[inputs.snmp.field]]
        name = "modelName"
        oid = "SYNOLOGY-SYSTEM-MIB::modelName.0"
    # model name of the Synology device
    
    [[inputs.snmp.field]]
        name = "serialNumber"
        oid = "SYNOLOGY-SYSTEM-MIB::serialNumber.0"
    #  serial number of Synology device
    
    [[inputs.snmp.field]]
        name = "version"
        oid = "SYNOLOGY-SYSTEM-MIB::version.0"
    # DSM version that Synology is using
    
    [[inputs.snmp.field]]
        name = "upgradeAvailable"
        oid = "SYNOLOGY-SYSTEM-MIB::upgradeAvailable.0"
    # Indicates if a new version of DSM is available to install
Passwords have obviously been changed.
I also don't appear to be getting anything back from the MIB files...
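One detail worth knowing for the uptime panel built from the config above: `RFC1213-MIB::sysUpTime` is reported in timeticks, which are hundredths of a second, so the dashboard query has to divide by 100 to get seconds. A minimal sketch of the conversion:

```python
def ticks_to_seconds(timeticks: int) -> float:
    """sysUpTime timeticks are hundredths of a second (RFC 1213)."""
    return timeticks / 100

def ticks_to_days(timeticks: int) -> float:
    """Convert sysUpTime timeticks to days for display on a panel."""
    return ticks_to_seconds(timeticks) / 86_400

# one day of uptime = 8,640,000 timeticks
assert ticks_to_days(8_640_000) == 1.0
```

In Grafana the same effect is usually achieved with a `/ 100` in the query (or a unit override on the panel) rather than in code.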
 
@Rusty Ok, this may well be where I'm going wrong....
Default install from your json:
1612277968243.png

Editing CPU Utilization:
1612278018031.png

Change the datasource to Synology (what I've named it; it is also set as the default):
1612278177262.png


Don't quite understand where I'm going wrong. :(
I know it's me being stupid.

I know I'm getting data into the database as a docker overview gives me information:
1612278390909.png
 
I've also just checked: I'm able to query SNMP using an SNMP tester and get output from it, but nothing shows in Grafana. :(
Also, when I manage to get it to see the network interfaces (non-SNMP, as I can't get any information from SNMP), I can only see eth0 and not eth1. :(
 
@Akira , try removing the host condition in your query.
Mine looks like this:

Code:
SELECT "usage_iowait" AS "IOWait", "usage_system" AS "System", "usage_user" AS "User", "usage_nice" AS "Nice" FROM "cpu" WHERE ("cpu" = 'cpu-total') AND $timeFilter
 
@Shoop That now gives me a 2-line output :) which is definitely a step forward, thank you. :)
But how do I apply that to the rest, to get the whole dashboard working?
Argh... being nagged to do some preparation work for dinner. Same thing happened yesterday when I was configuring reverse proxying. Well, back to this later. Thank you for your help @Rusty & @Shoop
 
Here is the query I have for my CPU Utilization:

1612286756898.png


Don't be alarmed by the values :D
Moments is currently re-indexing 61k photos and also processing some face recognition, which is eating my CPU.
 
