Messages - Tursiops

#31
If I understand that correctly, the Windows server and the NetXMS server are on different networks.
How is the NetXMS server connecting to the Windows server for SNMP polls?
Is there a route to allow direct SNMP polls, a port forward through a router/firewall or are you using a proxy?
Is SNMP polling enabled for the Windows server?
Is the SNMP service running and configured to allow the polls? (I believe by default the service will only allow queries from 127.0.0.1.)
Does a Configuration Poll confirm that the NetXMS server can poll SNMP on the server at all?
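If you have shell access on the NetXMS server, a quick way to test basic SNMP reachability is the bundled nxsnmpget tool, e.g. (assuming SNMP v2c and a community of "public", both of which you'd adjust):

  nxsnmpget -v 2c -c public <windows-server-ip> .1.3.6.1.2.1.1.3.0

That OID is sysUpTime; if this times out, the server can't reach SNMP on that host at all and the node configuration in NetXMS isn't the problem.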
#32
General Support / Re: Netxms core do not start
September 03, 2019, 01:06:54 AM
Your database is locked. Presumably from when you had NetXMS running on WiFi with your system having the IP 192.168.134.1.
Hence NetXMS works when you run it on WiFi (same IP that the lock belongs to) and not on your other connection (different IP than the lock).

To unlock your database, ensure your NetXMS service is stopped and run:
nxdbmgr unlock
#33
As per http://www.cplusplus.com/reference/ctime/tm/, I believe months are "months since January" and therefore go from 0-11, not 1-12.
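If this is inside an NXSL script, localtime() mirrors the same struct tm layout, so the same offset applies. A minimal sketch:

  t = localtime(time());
  // t->mon is "months since January": 0 = January ... 11 = December
  displayMonth = t->mon + 1;  // 1-12 for human-readable output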
#34
General Support / Re: Timescale DB
August 25, 2019, 03:04:26 AM
Quote from: idaham on August 24, 2019, 05:09:30 PM
Quote from: Tursiops on July 05, 2019, 01:27:59 AM
The NetXMS database schema is different for TimescaleDB (even from Postgres -> TimescaleDB), so you'd have to use nxdbmgr for the migration. It's meant to support migration to TimescaleDB from 2.2.14 onwards (haven't tried it yet, we're still running load tests and comparisons between TimescaleDB and Postgres, feeding 50-60GB of live data into each system per day to compare performance).
One thing to note on TimescaleDB use, from what I can tell NetXMS configures the Hypertables with a 1 week chunk interval by default. Depending on your data ingestion rate, you may want to change that before you go live (e.g. for our test, I changed it to 1 day).

When feeding 50-60GB of data per day, how many devices/instances does that represent?


That was for testing purposes only. I configured a number of firewalls to generate verbose syslog messages on everything and had them send those to the test system, in addition to creating a range of "poll every 5 seconds" DCIs.
So it was live data, but not something I'd expect from a real production system. Can't really compare that to a number of nodes.

As Victor said, TimescaleDB support is much improved in version 3. I'd suggest waiting for the release or upgrading to version 3 (beta version in 'unstable' repository) if you intend to use it now.
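For reference, the chunk interval change I mentioned in the quote above can be done with TimescaleDB's set_chunk_time_interval. The table name below is my assumption, and since NetXMS stores timestamps as integer unix time, the interval is given in seconds:

  SELECT set_chunk_time_interval('idata', 86400);  -- 1 day instead of the 1 week default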
#35
General Support / Re: cURL command from server
August 24, 2019, 02:51:30 PM
You can create an Action on your server which can call a command on your server (or a remote node). The action itself can be triggered through an Event Processing Policy for the Event in question.
The commands on an agent can be defined via the "Action" parameter in the agent configuration file. You can pass parameters to those actions as well.
I assume that's what you're looking for?
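As an illustration, given the thread topic, an agent action calling curl could look something like this in the agent configuration file (the action name and URL are made up; I believe parameters passed to the action can be referenced as $1 and so on, but check the documentation):

  Action = callWebhook:curl -s "https://example.com/hook?arg=$1"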
#36
You might want to add some memory to your system.
We have ~ 5000 nodes, 35k objects and 320k DCIs and our NetXMS server alone happily grabs ~ 50-60GB of memory.
At present we're running this on a single server with 160GB of RAM. Note that this was added about a year ago when we hit some odd memory issues, which turned out to be caused by a bug that gobbled up all available memory mappings over time, so it always looked like our system ran out of memory... we added a crazy amount to confirm it wasn't that.
Having said that, our Postgres also happily eats up 64GB of memory. Could I cut our server down to ~128GB? Probably. Would it run on 12GB? I wouldn't think so.

I am pretty sure the recommendation from the Postgres guys is to set shared_buffers to ~25% of your RAM (and that there's generally no point going over 40%).
There are several other configuration items we have in our system (e.g. synchronous_commit=off, wal_level=minimal, a significantly higher effective_io_concurrency than default, though that one depends on your disk subsystem). However, as I am not a Postgres guy, of course didn't add notes to the config as to why I changed values :-[ and we added things at times when NetXMS was running significantly less stable for an install size such as ours, I am not sure which of these are actually still relevant or are in fact counterproductive.
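For what it's worth, the relevant part of our postgresql.conf looks roughly like this (values are illustrative for a 160GB box, not a recommendation):

  shared_buffers = 40GB              # ~25% of RAM
  synchronous_commit = off
  wal_level = minimal
  effective_io_concurrency = 200     # depends heavily on your disk subsystem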
#37
The following won't help with your performance issue, but should explain the 2 billion events.

The 2 billion events would be the total event counter resetting to 0. In version 3 you can open the raw DCI value graph to confirm that. It would basically be the count of all events the system had processed since its last startup.
I see that with a few counters like that. In version 2 I used to add some code to just drop values that seemed crazy high. In version 3 I'm wondering if I can use the raw DCI data to detect a decrease in value, at which point I'd simply replace the current simple delta with the current raw DCI value (as going down on a total value counter should only happen on a reset to 0). Similar with average delta per minute, except I'd need to take the timestamp into account as well.
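As an untested sketch of that idea for the simple delta case, an NXSL transformation script on a DCI left at "no delta processing" (the custom attribute name is arbitrary):

  // $1 is the current raw counter value
  prev = GetCustomAttribute($node, "lastEventCounter");
  SetCustomAttribute($node, "lastEventCounter", $1);
  if (prev == null)
     return 0;            // first sample, nothing to compare against
  value = int64($1);
  prev = int64(prev);
  if (value < prev)
     return value;        // counter reset to 0, so the raw value is the delta
  return value - prev;    // normal simple delta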
#38
The following thread might be helpful: https://www.netxms.org/forum/general-support/deleting-objects-interfaces/
Basically, add the relevant code into your Hook::CreateInterface script. You should be able to return false on interfaces that don't match the type you're looking for.
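A minimal untested sketch, using ifType 6 (ethernetCsmacd) as the example filter:

  // Hook::CreateInterface - $1 is the interface about to be created
  if ($1->ifType != 6)
     return false;   // skip creation of anything that isn't plain ethernet
  return true;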
#39
To stop the event from coming in in the first place, you would have to reconfigure your Ciscos. I'm guessing "no snmp-server enable traps mac-notification" should do it.
Otherwise, as long as the traps come in, NetXMS will have to process "something"?
#40
General Support / Re: Deleting Objects interfaces
August 18, 2019, 01:52:37 PM
You can right-click on any node to execute a server script (I'm assuming that's what you mean by debug console?) and just write your code in there. Your code wouldn't be node specific (if you are indeed enumerating all nodes and then all interfaces per node), so it doesn't matter against which node you run it. Or you could write it in the script library and then select it from the drop-down. Or you could schedule a task that runs a script from the script library.
#41
General Support / Re: Deleting Objects interfaces
August 18, 2019, 07:12:33 AM
You could write a script to go through all devices, enumerate the interfaces and utilise the DeleteObject function in NXSL to remove the interfaces. Or you could add the interface enumeration and delete to the configuration poll hook.
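A rough, untested sketch of such a script (the ifType filter is just an example; try it on a lab system first, as DeleteObject is destructive):

  foreach (node : GetAllNodes())
  {
     foreach (iface : GetNodeInterfaces(node))
     {
        if (iface->ifType != 6)   // example: keep only ethernetCsmacd
           DeleteObject(iface);
     }
  }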
#42
drop_chunks requires TIMESTAMP, TIMESTAMPTZ or DATE if the cut-off point is set as an interval.
If the cut-off point is given explicitly, you can use integers, see https://docs.timescale.com/latest/api#drop_chunks
As NetXMS uses integers in its current cleanup queries, I would assume it will use explicit cut-off points for drop_chunks as well.
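i.e. something along these lines, where the table name and the unix-timestamp cut-off are my assumptions:

  SELECT drop_chunks(1561939200, 'idata');  -- drop chunks older than 2019-07-01 UTC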
Note that I'm not a dev and haven't looked at the code...
#43
Indeed "it depends". In fact it depends very heavily on which part you are trying to multi-tenant.
We are monitoring > 100 SMB networks in our installation; however, we do not give our customers access to it at this time.

Templates & Policies (same thing in version 3)
We wouldn't give customers access. An auto-bind rule a customer writes for "their" equipment would also apply to other customers' equipment. You would have to trust the customer to add checks for their zone (or custom attributes or whatever else you wish to use to identify a customer) into their templates. I do not trust. :)
If you are managing this, there is probably no reason not to apply the same templates to everyone. You can always use auto-apply rules for customer specific templates.

Mapping customers/nodes to containers
Auto-bind rules allow for that. But again, I wouldn't give customers write access to that as I do not trust the customer to always do the right thing here.

Event Policy Rules
And again, I wouldn't give customers access. There are no access controls to individual rules, you either can access EPRs or you can't. You do not want your customers to be able to mess with that. If the idea is that you're handling all of the rules, it is much easier. You can apply rules based on source objects, e.g. Zones. Make sure you get your naming conventions right so you can easily identify which rules apply to whom. You can configure customer specific settings as custom attributes on zones, containers or nodes. Or use persistent storage variables. You can script this to fit your needs.
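As a hypothetical example of such a filtering script (the zone UIN and custom attribute name are invented; I believe the attribute is called zoneUIN in version 3 and zoneId in 2.2):

  // EPP rule filtering script: only match events from the example customer's zone
  if ($node == null || $node->zoneUIN != 42)
     return false;
  // per-customer setting stored as a custom attribute on the node
  return GetCustomAttribute($node, "customerAlertEmail") != null;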

Event Strategy
You can use the same events across your customers and apply different rules in EPRs depending on the customer. You can also create customer specific events if needed for customer specific templates. Once again, I wouldn't let customers create events.

Action Strategy
You might sense a pattern here.... I wouldn't let customers configure those, but you can set them up to deal with different customers yourself.

Basically, NetXMS does not lend itself to full multi-tenancy where you give actual administrative control over a zone to a different admin who can then only change their own templates, policies, rules, etc.

You can use it to give customers some level of access to their own devices. This is something you should test thoroughly. You do not want to have some area where one customer can pull data from another, e.g. via scripting or other means. You will need to set up trusted nodes, rather than assuming trust between everything. That also means newly discovered devices will need to be configured with trusted nodes. I haven't checked if that can be done via NXSL as discovered nodes are being created.

For logging, you need to enable ExtendedLogQueryAccessControl (disabled by default). It will make queries for logs slower, but it means customers won't be able to see each other's logs. However, we found a little while ago that "unknown" sources will be visible to everyone. That has been raised with the developers. Having said that, enabling UseSyslogForDiscovery and UseSNMPTrapsForDiscovery might be a workaround as it would create a new node from unknown sources. From that point onwards access restrictions would apply.

As we have not actually configured our system for this, I do not have any actual experience with that kind of setup. We simply needed a locked down temporary account one day for one particular customer and the above was what we found while doing so.
#44
For device specific monitoring, unless someone already posted templates in https://www.netxms.org/forum/general-support/sharing-standard-templates-for-netxms/ (I am pretty sure there are MikroTik templates in there), you will need to build your own templates (which you can then auto-apply to all your devices). Installing the MIBs in NetXMS will make that process a lot easier, as you will be able to browse the devices for the information you are looking for.

Once you've developed your templates, and if you are willing to share them, please post them in the thread linked above so other users with such devices can benefit as well.
#45
The DCIs marked in the attached image are Instance DCIs. These are in your template and assigned to each node that has the template assigned. They do not collect data themselves.
They are used to create additional DCIs on the nodes via the Instance Discovery process. These newly created DCIs are specific to the node.
In short, what you are seeing is perfectly normal.
I recommend reading up on Instance Discovery: https://www.netxms.org/documentation/adminguide/data-collection.html#instance