Everything you always wanted to know about DAM but were afraid to ask

#1 – What exactly is DAM?

You can find many DAM definitions and may be a little confused by the dozens of different features mentioned in them, but some of those features appear in every definition and can be considered key requirements (DAM sensu stricto):

  • 100% visibility of access to data
  • monitoring completely independent of database administrators
  • analysis performed at the SQL level
  • real-time, correlated incident identification
  • audit of events related to incidents
  • support for forensic analysis

Some other features are not native to DAM, but their popularity means they are now widely recognized as part of DAM (DAM sensu lato):

  • access blocking (this feature is generally part of DAMP – Database Activity Monitoring & Protection, also known as DBF – Database Firewall)
  • database user authorizations reporting
  • sensitive data identification
  • dynamic data masking (at the database level)
  • vulnerability management (whatever that means to the requestor 😉 )

We can also identify some non-functional requirements relevant to any security solution:

  • minimal influence on the performance of the monitored system
  • support for heterogeneous database environments
  • support for enterprise-scale deployments

It is very difficult to compare solutions. Be sure that you compare “apples” to “apples” instead of “apples” to “pears”. Very often a requested DAM feature works on a different layer and is covered by another solution (WAF, IPS, NG-Firewall, CM tools).
Ask for support of your use cases and requirements rather than for the list of functions included in the vendor’s box.

#2 – Agent-based or agent-less monitoring?

In the case of DAM there can be only one answer to this question. 100% data traffic visibility is not possible with a network sniffer (agent-less), because you are not able to monitor local sessions.

How your database is accessed:

  • remotely (TCP, network pipes, encrypted connection)
  • locally (TCP, shared memory, network pipes)

Only an agent residing on the monitored system can see local sessions and non-TCP protocols. It is hard to argue with this obvious statement. However, some remarks are important:

  • an agent installed on the monitored system affects it – but the question is about the acceptable level of this performance influence, not about the choice between agent-based and agent-less architecture
  • an agent requires updates, reconfiguration, database and system restarts – this can be true for a particular solution but is false in the case of Guardium

Only agent-based monitoring ensures coverage of the DAM requirements. Check the supportability of your platforms and protocols. Check the performance overhead on your database.

Even if you are able to disable all local access to the database, you still assume that your network configuration is stable and that all sessions are visible to the sniffer, which is not true at all.

#3 – Does your DAM prevent SQL Injection?

I love this stuff. This question is completely unrelated to the SQL level; it is a question about the protection of a web application.
If you would like to stop SQL injection attacks, the solution is easy – use a WAF or IPS/NG firewall. These types of solutions work on the network layer and are able to de-encapsulate HTTP/S data, parse it and identify dangerous content (an injected SQL string or its meta-form).

It is a textbook example of how one commonly known word in a name leads to misunderstanding the crux of the problem and its resolution.

SQL injection must be analysed at the HTTP/S layer. It is not related to DAM protection.

If your WAF or IPS is not able to block the attack, the DAM will still be able to analyse the SQL syntax, session context and data references. This is a normal DAM task and should not be mistaken for SQL injection protection.
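To illustrate the layer difference, here is a deliberately naive sketch of the kind of check a WAF performs on a decoded HTTP parameter before any SQL ever reaches the database. The patterns and function name are illustrative only – real WAF/IPS engines use full parsers, normalization and meta-form signatures, not a handful of regexes.

```python
import re

# Illustrative signatures for injected SQL fragments in an HTTP parameter.
INJECTION_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),
    re.compile(r"(?i);\s*drop\s+table\b"),
]

def looks_like_sql_injection(http_param: str) -> bool:
    """Check a decoded HTTP parameter value for injected SQL fragments."""
    return any(p.search(http_param) for p in INJECTION_PATTERNS)

print(looks_like_sql_injection("id=5 OR 1=1"))      # True - flagged
print(looks_like_sql_injection("name=John Smith"))  # False - clean
```

The point of the sketch: the decision is made on HTTP content, before the request is ever turned into SQL – which is exactly why this protection belongs to the WAF/IPS layer and not to DAM.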

#4 – Can we build Virtual Patch protection with DAM?

In many respects the answer is similar to the SQL injection case, but I will describe it in more depth.

VP is a security approach that creates protection outside the vulnerable system. Some examples:

  • the system can be exploited but a patch does not exist or cannot be installed
  • a vulnerable functionality has to be available to a particular subject only
  • a service has a low reputation and whitelisting of its activity is required

There are many situations where DAM can provide VP protection:

  • blocking access to a vulnerable stored procedure
  • restricting access to defined clients only
  • accepting only a defined list of SQL statements and operations on an object

but if the vulnerable element resides in the database, we need to consider that its exploitation can uncover another attack vector. That is why VP should primarily be defined on the network layer, using IPS and NG firewalls.

DAM can act as an auxiliary in building VP. Network in-line protection should be considered first.
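The three DAM-based virtual-patch measures above can be sketched as one policy decision per parsed SQL call. Everything here is a hypothetical model – the client addresses, object names and the `evaluate` function are invented for illustration and are not Guardium policy syntax.

```python
# Hypothetical virtual-patch rules a DAM policy might express.
ALLOWED_CLIENTS = {"10.0.0.15", "10.0.0.16"}     # assumption: the app servers
ALLOWED_STATEMENTS = {                           # assumption: per-object whitelist
    "ACCOUNTS": {"SELECT", "UPDATE"},
}
VULNERABLE_PROCEDURES = {"XP_CMDSHELL"}          # blocked outright

def evaluate(client_ip: str, verb: str, obj: str) -> str:
    """Return the policy decision for one parsed SQL call."""
    if obj.upper() in VULNERABLE_PROCEDURES:
        return "BLOCK"   # access to a vulnerable stored procedure
    if client_ip not in ALLOWED_CLIENTS:
        return "BLOCK"   # client outside the defined list
    if verb.upper() not in ALLOWED_STATEMENTS.get(obj.upper(), set()):
        return "BLOCK"   # operation outside the whitelist for this object
    return "ALLOW"

print(evaluate("10.0.0.15", "select", "accounts"))   # ALLOW
print(evaluate("10.0.0.99", "select", "accounts"))   # BLOCK
```

Note that all three rules act only on SQL the database already received – an exploit that never takes the SQL path is invisible to them, which is the argument for putting the primary VP in-line on the network.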

#5 – What is your DAM data collection architecture?

Some solutions do not work in real time and use DB logs or an additional event collection mechanism to provide SQL visibility. If we do not need blocking, this architecture could be acceptable, but such logging is dependent on the DB administrators and does not provide any segregation of duties (for example, an insider can modify or switch off the logging mechanism).

How the audit data are stored and managed by the DAM is another architectural question. Would you like to switch from one audit console to another to check the status of your monitored environment? Would you like to remember which DAM box contains the data required for the current analysis? And, most importantly, do you know which of the stored audit data will be the key in your forensic searches?
A DAM solution usually monitors heterogeneous environments, covers dozens of databases and gathers terabytes of audit archives over the retention period.
That is why I suggest considering the following:

  • the possibility to manage the whole DAM environment from one console
  • the possibility to aggregate data for de-duplication and load distribution
  • central reporting from all DAM boxes
  • cross-reporting based on any parameter of the audit event
  • offline forensics on restored archives

DAM is a key element of your security infrastructure. Be sure that its architectural limitations will not close off possibilities for development and integration.

#6 – Why do I not see user names in DAM?

At the SQL session level we see the DB user name only. If you would like to get information about the application user name related to a particular SQL statement, you need to understand that this relation is created and managed by the application server (connection pool manager).

Every DAM faces this challenge and provides its own solutions, but each of them requires deeper analysis and sometimes application modification.

Guardium delivers many different solutions for Application User Translation in a connection pool, which are described here – “Guardium – App User Translation”.

Application User Translation (AUT) is the process of correlating an application user with his SQL statements inside an anonymised connection pool.
Make sure that AUT does not rely on a simple correlation between time stamps in the application and the database. This kind of mapping in a multi-session channel is not credible and has no legal value.
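A toy model of why a marker-based translation works where time-stamp matching cannot: when many application users share one pooled DB session, their statements interleave, and only an explicit in-band marker (for example, a hypothetical `SET APP_USER` call the application emits before each unit of work) lets the monitor attribute each SQL statement correctly. The event format and names below are invented for illustration.

```python
# Toy event stream captured on ONE pooled database session. Two application
# users' work is interleaved; a marker precedes each user's statements.
events = [
    ("MARKER", "alice"),
    ("SQL", "SELECT * FROM accounts WHERE id = 7"),
    ("MARKER", "bob"),
    ("SQL", "UPDATE accounts SET balance = 0 WHERE id = 7"),
]

def translate(stream):
    """Attach the most recently seen application user to each SQL statement."""
    current_user, result = "unknown", []
    for kind, payload in stream:
        if kind == "MARKER":
            current_user = payload
        else:
            result.append((current_user, payload))
    return result

for user, sql in translate(events):
    print(user, "->", sql)
```

With time stamps alone, both statements would belong to “whoever was active around that second”, which in a busy pool is meaningless – hence the article’s warning about the legal value of such mappings.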

#7 – I have a SIEM, why do I need DAM?

Security Information and Event Management (SIEM) systems are responsible for correlating security events across the IT infrastructure in order to identify incidents. These tools are based on the monitored systems’ security logs, network activity, recognized vulnerabilities and reputation lists.

A SIEM manages the security events delivered to it in a predefined schema; it is not able to understand the HTTP requests of your application, the SQL logic of your database transactions, the commands executed by your administrator, and so on. It expects that the monitored system will prepare a standardized output containing the relevant information, which can be normalized and analyzed against the incident identification rules inside the SIEM correlation engine.

Only DAM has the ability to analyze each SQL statement and identify access to sensitive data, monitor privileged activity, correlate access to tables, and predict the effect of DML/DDL/DCL actions.

In most cases SIEM licensing is based on the EPS (Events Per Second) metric. Even if a SIEM contained the DAM intelligence and we wanted to analyze all SQL statements inside it, the cost of such a solution would be astronomical.
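A back-of-the-envelope illustration of the EPS argument, with purely assumed numbers (database count, SQL rates and license sizing are mine, not from any vendor):

```python
# Assumed figures for a modest estate - adjust to your environment.
databases = 50            # assumption: monitored database instances
avg_sql_per_sec = 1_500   # assumption: per-database SQL rate under load

total_eps = databases * avg_sql_per_sec
print(f"SQL events per second: {total_eps:,}")       # 75,000

siem_sized_eps = 5_000    # assumption: a typical mid-size SIEM license
print(f"Oversize factor vs. SIEM license: {total_eps / siem_sized_eps:.0f}x")
```

Even with these conservative assumptions, feeding raw SQL into the SIEM would exceed the licensed event rate many times over – which is why the DAM analyzes SQL locally and forwards only the distilled incidents.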

DAM delivers analyzed security events to the SIEM in a constant data format, which enables their correlation with other monitored sources.

#8 – Does your DBF work on the session or SQL level?

DAM blocking capability is often requested, but it should be considered very carefully. Most application traffic to the database consists of transactional statements, where the set of SQL statements and their order affect the processing carried out and its result. If we block one of the calls in this sequence, we can get an exception or, worse, a loss of data consistency.

The business security priorities – confidentiality, integrity and availability (CIA) – lead to one possible conclusion: only a session reset is a safe method of blocking access, because it avoids the execution of incomplete transactions.
However, this method is useless with a connection pool – a reset of the SQL session kills transactions belonging to different application sessions.
That is why blocking was actively used only for non-application access to the database, while application access was monitored with whitelisting.

Guardium 10 with the Query Rewrite feature redefines this approach. Now we can analyze an SQL statement and replace it – not in order to change the transaction’s body, but to signal suspicious activity and cancel its execution.

from:

BEGIN TRANSACTION
...
END TRANSACTION

to:

BEGIN TRANSACTION
...
(suspicious SQL) -> (redacted to set @PARAMETER)
...
(@PARAMETER validation to cancel execution)
END TRANSACTION

It requires small changes in the application but provides “blocking” at the transaction level.

Only a connection reset is an acceptable form of blocking in most cases. For application traffic, use Query Rewrite.
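The pseudocode above can be sketched as plain logic: a suspicious statement inside a transaction is replaced with a parameter assignment, and an application-side validation step then rolls the transaction back instead of committing. All names here (`@SUSPICIOUS`, the checker function) are illustrative, not Guardium Query Rewrite syntax.

```python
# Sketch of the transaction-level "blocking" idea from the pseudocode above.
def rewrite(statement: str, is_suspicious) -> str:
    """Replace a suspicious statement with a harmless parameter assignment."""
    if is_suspicious(statement):
        return "SET @SUSPICIOUS = 1"
    return statement

def run_transaction(statements, is_suspicious):
    """Execute the rewritten statements; validation decides commit or rollback."""
    executed, flagged = [], False
    for stmt in statements:
        stmt = rewrite(stmt, is_suspicious)
        if stmt == "SET @SUSPICIOUS = 1":
            flagged = True
        executed.append(stmt)
    # application-side validation of @SUSPICIOUS before END TRANSACTION
    return ("ROLLBACK" if flagged else "COMMIT"), executed

tx = ["UPDATE accounts SET balance = 0", "DROP TABLE audit_trail"]
decision, _ = run_transaction(tx, lambda s: s.startswith("DROP"))
print(decision)   # ROLLBACK
```

The key property, matching the article’s argument: no statement in the sequence is dropped mid-flight, so the transaction either completes or is rolled back as a whole, preserving consistency.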

PICTURES BY ULABUKA

Entitlement Reports

Each security incident analysis must answer the question of who is responsible for it. The question seems simple but the answer is not.
Whose credentials were used in the attack, who granted them and, most importantly, was the person who used them their rightful owner?

In the case of databases, the problem becomes even more complex due to the multi-dimensional matrix of privileges.
This problem is handled by a separate kind of security solution – Privileged Identity Management (PIM) – which provides access accountability and session recording, but even if we have it, we are still exposed to account takeover (ATO) and service exploitation.
In these cases we should be able to answer a few important questions:
  1. What privileges did the user have at the particular (incident) point in time?
  2. Were the authorizations consistent with change management, or did they bypass it?
  3. Were they sufficient for the attack?
  4. Was the used account related to the activity of the account owner?

Answering the first question requires the implementation of a full identity management process, which is not simple at all and usually covers database access management at the role level only.

The Guardium Entitlement Reports (ER) functionality is a simple but very useful feature to quickly determine an account’s authorizations at a defined point in time.

New: Guardium 10 ER contains a new set of reports for DB2 on iSeries.

ER Prerequisites

ER works outside the standard activity monitoring and is based on a scheduled data upload to customized audit data domains. Similarly to Data Classification and Vulnerability Assessment, it uses a direct data connection to the monitored database to collect the required information.

We need to create an appropriate technical account for each database where ER data will be gathered. On each Guardium appliance there are SQL scripts with a role definition containing all the credentials required to get the ER content.

You can download them over fileserver; they are located in /log/debug-logs/entitlemnts_monitor_role/

Entitlement scripts

When the role is already created and attached to a technical account, we can create a data source (Setup->Tools and Views->Datasource Definitions) for the “Custom Domain”.

Data source definition

Use the plus icon to add a new data source; the example below defines MSSQL access using SSL without authentication.

MSSQL data source

The Test Connection button is activated once the datasource configuration has been saved (Apply).

Tip: The data source creation process can be invoked directly from the ER process, but for clarity it is presented here as a separate task.

Data Upload

Now we can define the data upload process. For each database we have a set of ER reports. All are located inside custom tables. For example, for Oracle we can find 14 prepared tables (all names starting with ORA) – Reports->Report Configuration Tools->Custom Table Builder

Custom table builder

We need to configure data upload for each report of interest.
Select the report and push the Upload Data button.

Data upload

The Add Datasource button allows us to add the data sources for which entitlement snapshots will be created. We can point to multiple data sources defined earlier, or create a new one.

The Overwrite flags (per upload, per datasource) define how the data will be stored:

  • if both flags are unselected, old data will not be removed when a new snapshot arrives (each ER data record contains a time stamp, so we are able to identify records in time)
  • per upload means that the old data will be erased every time an upload is executed – it makes sense only when a particular report contains only one datasource, or when we want to remove old data intentionally
  • the per datasource flag ensures that only the old data of the currently updated datasource will be erased – it protects the old data of datasources which are not available during the current data upload
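The three storage behaviours can be modelled in a few lines. This is a toy model of the rules described above, not Guardium internals; the tuple layout and function name are invented for illustration.

```python
# Toy model of the Overwrite flags. The store holds
# (datasource, timestamp, row) tuples for one custom table.
def apply_upload(store, datasource, new_rows,
                 per_upload=False, per_datasource=False):
    if per_upload:
        store = []                                        # erase everything
    elif per_datasource:
        store = [r for r in store if r[0] != datasource]  # erase this source only
    # with both flags off, nothing is erased - snapshots accumulate
    return store + [(datasource, ts, row) for ts, row in new_rows]

base = [("db1", 1, "a"), ("db2", 1, "b")]
# per-datasource overwrite of db1 keeps db2's older snapshot intact
print(apply_upload(base, "db1", [(2, "a2")], per_datasource=True))
```

The per-datasource case is the one that protects snapshots of a database that happened to be offline during the current upload – only rows belonging to the uploaded source are replaced.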

The default purge for custom domains is executed every day and removes data older than 60 days. This behavior can be changed (described later).

Now we can upload data manually (Run Once Now) and/or define how often the authorization snapshot will be created (Modify Schedule).

Configured data upload

It is the user’s decision how often snapshots will be created. However, some recommendations:

  • if you overwrite data, you need to archive them first (using an audit process)
  • the data upload gets data directly from the database; it is not a heavy task, but for large databases with thousands of roles and tables the quantity of data can be huge
  • snapshots provide the authorization state at a particular time; to cover forensics requirements we also need to audit the DCL (grant, revoke) transactions
  • a 6-24 hour snapshot schedule is usually sufficient

    Data upload scheduler

The data upload configuration steps described here should be repeated for all the ER custom tables of interest.
Now we can review the uploaded data (add the ER reports to your dashboard).

Predefined ER list for Informix

ER report example – MSSQL objects visible for everyone

The predefined ER reports have a raw format and cannot be modified, so I suggest redefining them to get the expected appearance.

ER report customization

This standard report presents all privileges and roles assigned to users on an MSSQL server. You can notice that 2 snapshots have been created in the last 3 hours, and that we cannot filter on them the way we can on the other parameters.

2 snapshots in the standard report

Below are some report variations:

#1 – Last snapshot with quick data filtering

Query

Report

We see the last snapshot from a defined time frame, and we can filter the data by user, authorization, authorization type and database.

New: Guardium 10 allows hiding particular columns of a query. No more query reconstruction for this purpose 🙂

Column configuration

#2 – List of snapshots

Query and Report

New: The “Runtime Parameter Configuration” window separates user-defined parameters from the others. No more searching the parameter list for our own 🙂

Report runtime parameters

#3 – Number of authorizations per user

Query

Graphical report

#4 – Authorizations from particular snapshot

Unfortunately, a report parameter based on a time stamp can be defined with one-day granularity only. It does not allow us to point to a specific snapshot. Really?

We can use a computed attribute to create a snapshot id based on the snapshot time stamp:

grdapi create_computed_attribute attributeLabel="Snapshot ID" entityLabel="MSSQL2005/2008 Role/Sys Privs Granted To User" expression="MD5(SQLGUARD_TIMESTAMP)"

This command creates a new dynamically computed attribute as an MD5 hash string based on the time stamp value.
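The trick is worth spelling out: hashing the time stamp turns a value we can only filter with one-day granularity into a stable, exact identifier that behaves like any other report parameter. A minimal sketch of the same idea (the time stamp format is illustrative):

```python
import hashlib

def snapshot_id(timestamp: str) -> str:
    """Derive a stable snapshot identifier from its time stamp, as the
    MD5(SQLGUARD_TIMESTAMP) computed attribute does."""
    return hashlib.md5(timestamp.encode()).hexdigest()

a = snapshot_id("2015-09-11 00:45:00")
b = snapshot_id("2015-09-11 00:49:00")
print(a == snapshot_id("2015-09-11 00:45:00"))  # True: deterministic
print(a != b)                                   # True: distinguishes snapshots
```

Because the hash is deterministic, the same snapshot always yields the same id, so it can be listed in one report and pasted as a filter parameter into another.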

Now I can modify the snapshot list report to see this unique id

Query and Report

and add the snapshot id to the parameter list of any report to filter the data by time stamp. Easy!

Report with computed attribute

Below is an example of a dashboard for incident analysis based on ER reports.

Forensics

We can notice in this example that the badguy user’s authorizations were changed between 00:45 and 00:49. Using the snapshot id parameter we can present these two snapshots side by side and identify the change quickly.

How to create your own ER report?

Guardium delivers many different ER reports for DB2, Informix, MSSQL, MySQL, Netezza, Oracle, PostgreSQL, SAP ASE, SAP IQ and Teradata. The custom domain mechanism allows creating your own reports for other databases, or adding reports to cover information unavailable in the predefined ones.

A good example is MSSQL, where the user login status is not visible in the predefined tables. From an incident management perspective this information is crucial and should be gathered.

I have prepared the SQL to get this information:

select loginname AS 'user',
       CASE denylogin WHEN 0 THEN 'ACTIVE' WHEN 1 THEN 'INACTIVE' END AS status,
       CASE isntuser WHEN 0 THEN 'LOCAL' WHEN 1 THEN 'ACTIVE DIRECTORY' END AS 'user type',
       dbname AS 'default database'
from master..syslogins
order by loginname

Next, we need to create a new custom table (Reports->Report Configuration Tools->Custom Table Builder). We have two possibilities: define the table from scratch, or import the structure from SQL. I prefer the second method:

Custom table creation

In the “SQL Statement” field we need to insert the SQL which returns a sample of the reference data. Add Datasource lets us specify the database where the sample exists. Finally, we are ready to Retrieve the table definition.

Table structure import

If the import was successful, we return to “Custom Tables”. To review the structure, push the Modify button.

Custom table selection

We can modify the fields, define keys and set up references to Guardium groups.

Custom table modification

Now we can Apply the changes and set the Upload data configuration.

Data upload

Note: A custom table definition can be modified only as long as it does not contain data.

We have the data, but they are not available for reporting until we create a new report domain (Report->Report Configuration Tools->Custom Domain Builder). The plus (+) icon allows creating a new domain. Insert the “Domain name” and find the custom table created earlier. Move it from “Available entities” to “Domain entities”. Then select the default time stamp from the “Timestamp Attribute” list and Apply.

Custom domain creation

Our new domain is now visible in the custom query builder (Report->Report Configuration Tools->Custom Query Builder). Select the domain and create all the required queries and reports. Below is a report with all MSSQL logins and their status.

MSSQL logins

ER data management

If we want to be able to use the collected data in forensic analysis, we need to set their proper retention (archive and restore). These settings are available in the “Custom Table Builder” – Purge/Archive button. The Archive check box ensures that data from the custom table are attached to the data archived in the standard archive process. We can define how long the data will be available locally (Purge data older than) and schedule the purge process (60 days is the default value).

Custom table archive

Tip: Do not forget to archive the data stored in custom tables.

Summary: ER is a useful tool in forensic analysis and significantly shortens the time needed to identify the permissions held by the subject of an incident. The ability to customize the data presentation, schedule the data load and expand the area of collected information makes this tool an indispensable element of the SO’s duties. These data can also be used to identify privileged accounts for the proper definition of the audit policy.

K-TAP installation failure on Linux is no longer a problem

One of the most important values of the Guardium system is its enterprise architecture. Whether it is installed to monitor one database or one hundred, we can manage the environment from one place and reconfigure it with appropriate segregation of duties and role-based access control.

Monitoring a database to cover Database Activity Monitoring (DAM) expectations requires visibility of all sessions (local and remote) and support for all database protocols (TCP, shared memory, pipes, etc.). That is why the Guardium monitoring agent (STAP) is deeply integrated with the operating system kernel (the KTAP module). However, the diversity of Linux distributions makes it necessary to support every kernel version existing on customer sites. Before Guardium version 9.1 this process required time (2-3 weeks) for module development and tests. Now the redeveloped KTAP can easily be recompiled, and support for a particular kernel version is no longer a problem.

When should I worry about KTAP?

KTAP compilation is a task of the installation process, and usually we do not need to pay attention to it. However, sometimes the system environment prevents proper compilation, and it is then necessary to analyze the situation and take the appropriate steps.

How to check whether KTAP is switched on?

Review the “STAP Status” report and note the value in the “KTAP Installed” column. The value No means that the kernel driver is not installed and activated.

STAP status

The “GIM Event List” report also provides more detailed information.

GIM Event List

Here we have information that the STAP does not contain a module for the kernel on this machine (3.10.0-229.11.1).
Then there is an explanation of the reason for the failure – the development tools have not been installed.
The last marked message points out that KTAP_ALLOW_MODULE_COMBOS is not set to Y. It means that the STAP will not try to load other modules which are close to the target kernel. This is the correct configuration for production environments, where any errors at the kernel level are unacceptable.

KTAP compilation process reinitialization

The KTAP compilation process requires the cc compiler (gcc), the make tool and the kernel development files on the Linux box. Check their presence on your machine – for RedHat use these commands:

yum list installed gcc
yum list installed make
yum list installed kernel-devel

Install the missing packages.

Package installation

Now we need to reinitialize the KTAP compilation process. The simplest method uses GIM to reconfigure the KTAP module. Open the module selection screen and unselect the “Display Only Bundles” option. Then select the KTAP module and go forward – Next button.

KTAP module selection

Set the KTAP_ENABLED field to 1 and Apply to Client. Execute the update using Install/Update.

KTAP update

Review the update status. After a while you should receive information that KTAP is installed. If you receive the FAILED status, restart the analysis from the “GIM Events List” report or analyse the log files (described later).

Update status

Now we can review the “GIM Events List” report again

GIM events

and the “STAP Status” report

STAP Status

The STAP has been installed and we can start data monitoring. A careful observer will notice the appearance of an additional entry, Guardium-FSM, related to the FAM functionality.

Important: The FAM agent works at the kernel level. This functionality requires the KTAP installation.

More detailed information about the KTAP status can be found in the <GIM_HOME>/module/KTAP/current/KTAP.log file.
This sequence shows the STAP installation and the lack of development tools:

[Fri Sep 11 20:42:21 2015] -I- Installing KTAP 10.0.0_r79963_1
[Fri Sep 11 20:42:22 2015] -I- Starting KTAP 10.0.0_r79963_1
[Fri Sep 11 20:42:23 2015] -I- Informing GIM on an event : *** KTAP MODULE WARNING MESSAGE ***
Searching for modules in /opt/guardium/GIM/modules/KTAP/10.0.0_r79963_1-1441996935/modules-*.tgz
guard_ktap_loader: File /lib/modules/3.10.0-229.11.1.el7.x86_64/build/.config not found.  Local build of KTAP will not
guard_ktap_loader: be attempted.  Please install kernel development packages for 3.10.0-229.11.1.el7.x86_64 if you wish
guard_ktap_loader: to build KTAP locally.
guard_ktap_loader: ===================================================================
guard_ktap_loader: You have elected not to load close fitting module combinations.
guard_ktap_loader: To enable close fitting combinations, reinstall bundle STAP while setting the
guard_ktap_loader: KTAP_ALLOW_MODULE_COMBOS to 'Y'
guard_ktap_loader: The in-kernel functionality will now be disabled.

and here is the fragment after the compilation reinitialization

[Fri Sep 11 21:49:06 2015] -I- KTAP_ENABLED changed its value to 1 ... updating guard_tap.ini)
[Fri Sep 11 21:49:06 2015] -I- checking is ktap 79963 is loaded as part of update()
[Fri Sep 11 21:49:06 2015] -I- Starting KTAP ... for the first time
[Fri Sep 11 21:49:06 2015] -I- Informing GIM on an event : *** KTAP MODULE INSTALLER PLATFORM CHECKS MESSAGE ***

[Fri Sep 11 21:49:06 2015] -I- SEOS check - ok !
[Fri Sep 11 21:49:06 2015] -I- Trying to load KTAP as part of a start request (invoker=)
[Fri Sep 11 21:49:14 2015] Searching for modules in /opt/guardium/GIM/modules/KTAP/10.0.0_r79963_1-1441996935/modules-*.tgz
Attempting to build KTAP module using dir /lib/modules/3.10.0-229.11.1.el7.x86_64/build
guard_ktap_loader: Custom module ktap-79963-rhel-7-linux-x86_64-xCUSTOMxdblin-3.10.0-229.11.1.el7.x86_64-x86_64-SMP.ko built for kernel 3.10.0-229.11.1.el7.x86_64.

In the same directory, the ktap_install.log file contains additional remarks

=== Fri Sep 11 21:49:07 CEST 2015 ===
Attempting to build KTAP module using dir /lib/modules/3.10.0-229.11.1.el7.x86_64/build
Custom module ktap-79963-rhel-7-linux-x86_64-xCUSTOMxdblin-3.10.0-229.11.1.el7.x86_64-x86_64-SMP.ko built for kernel 3.10.0-229.11.1.el7.x86_64.
/sbin/modprobe  ktap ktap_build_number=79963 sys_call_table_addr=ffffffff8161c3c0 kernel_toc_addr= kernel_gp_addr=   
Install OK
Load OK

What if I cannot install development packages on the system?

This situation typically concerns production environments, but we can build the package on another system (a test environment) with the same kernel and later install it on the target.

Method 1 – Manual installation on the target system

The list of KTAP modules embedded in a STAP release can be reviewed in the modules-<STAP-release>.tgz file:

[root@dblin current]# tar tvf modules-10.0.0_r79963_trunk_1.tgz | grep .ko | awk '{print $6}'
dummy.ko
ktap-10.0.0_r79963_trunk_1-rh7x64m-3.10.0-123.9.2.el7.x86_64-x86_64-SMP.ko
ktap-10.0.0_r79963_trunk_1-rh7x64m-3.10.0-123.el7.x86_64-x86_64-SMP.ko

After recompilation, the new KTAP module is located in the same KTAP directory:

[root@dblin current]# ls *.ko
ktap-79963-rhel-7-linux-x86_64-xCUSTOMxdblin-3.10.0-229.11.1.el7.x86_64-x86_64-SMP.ko

Now we can create the custom module archive – the guard_ktap_append_modules command

[root@dblin current]# ./guard_ktap_append_modules 
Original MD5SUM: c467e40397957a81916e0b4f6bfb2864  ./modules-10.0.0_r79963_trunk_1.tgz

The following modules will be added to ./modules-10.0.0_r79963_trunk_1.tgz
     ./ktap-79963-rhel-7-linux-x86_64-xCUSTOMxdblin-3.10.0-229.11.1.el7.x86_64-x86_64-SMP.ko

New MD5SUM: 3718924d80ee6dbbea81594521f7fc1a  ./modules-10.0.0_r79963_trunk_1.tgz

This command adds the compiled module to the modules archive. Then we can manually upload the modules-<STAP_release>.tgz file to a temporary directory on the target machine and execute

guard_ktap_loader retry <tmp_dir>/modules-<STAP-release>.tgz

Then restart the STAP, and the new KTAP module should be recognized and installed.

Method 2 – KTAP module transfer over GIM

Important: If the STAP_UPLOAD_FEATURE parameter is set to 1, the module recompilation process creates a custom STAP GIM file and transfers it to the collector which manages this STAP.

The KTAP compilation process automatically creates the STAP bundle on the appliance which manages this STAP (not on the GIM server). This module can be downloaded from the appliance using the fileserver command, from the /log/gim-dist-packages directory.

Fileserver

Tip: In version 10 the fileserver command has an additional parameter; the current syntax is:
fileserver ip_address_fileserver_client duration

Then you can upload this module to GIM server and install on the target machine.

Summary:
The Guardium KTAP driver can easily be built for the kernel residing on the target system. The module creation process assumes the existence of a quality assurance procedure.

Review Guardium Installation Manager (GIM) on Windows – version 10

IBM Fix Page

Guardium 10 brings us a lot of fantastic new features, but the development team has also improved existing functionalities. One of them is GIM.

Important: Starting with version 10, all STAP binaries are available on the IBM Fix Page; the Passport Advantage customer site does not include these files. However, the GIM and CAS installers are still available on PA.

GIM (Guardium Installation Manager) allows managing all Guardium services from one place, including installation, update, reconfiguration and removal. Heterogeneous and complex data access monitoring can be managed easily with GIM, which is why I always suggest that my customers use it, even if they are starting with one collector or do not plan to aggregate data or use a central manager.

The current version of GIM provides two types of installation – standard and listener mode. The first one is known from the previous version and assumes that the operating system administrator has all the information needed to define the communication between GIM and the Guardium appliance, and that the appliance is accessible during installation. However, in complex Guardium implementations with dozens of collectors, aggregators and a central manager, and hundreds of STAPs, configuring the network topology and the communication rules on firewalls and VLANs requires time and the synchronization of activities across many different customer IT teams. In order to accelerate and facilitate this process, we can install GIM in listener mode. In this mode the GIM process starts on the managed system and waits for the initial communication from the appliance. It simplifies the installation process and minimizes the work effort of the IT system operating team. Later, the Guardium administrator will be able to register the GIM agent to the appropriate collector or central manager.

Another very important feature is related to the security of communication between GIM and the appliance. Now we can specify our own SSL certificates and provide anti-spoofing protection using a shared secret phrase.

Communication between GIM agent and appliance

  • all data transfer channels use SSL
  • port 8444 is used for secured but unauthenticated communication
  • port 8446 is used for secured and authenticated communication
  • a standard installation based on the default appliance certificates is always registered on port 8446 – this does not allow identification of unauthorized attempts to register GIM agents on the appliance (so non-default SSL certificates are suggested)
  • if the SSL handshake based on non-default certificates fails, communication is switched to port 8444
  • port 8445 is used by the GIM agent in listener mode only; once the GIM agent is successfully associated with the appliance, GIM switches to port 8446
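When a GIM agent does not appear on the appliance, it is worth checking which of these ports are actually reachable. A minimal sketch (assuming a bash-capable host to run the check from; the appliance host name below is a placeholder):

```shell
#!/usr/bin/env bash
# Check which GIM-related ports on a host are reachable.
# "guardium.example.com" is a placeholder - replace with your appliance.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}

for port in 8444 8445 8446; do
  check_port guardium.example.com "$port"
done
```

For a managed system running in listener mode you would expect port 8445 to be open until the agent is associated with an appliance.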

Standard server GIM installation

  1. Download the GIM installation package (Guardium_10.0_GIM_Windows.zip) and unpack it.
  2. The GIM installer is inside the Windows_GimClient_r79461_Installer.zip archive. Copy and unpack it on the managed system.
  3. Run the installer – setup.exe (installation of the GIM service requires administrative privileges on the managed system).
  4. Click Next and provide the User Name and Company Name in the “Customer Information” window.
  5. The next installation screen provides the possibility to define a non-default installation path for the GIM binaries (Custom). The default path is
    c:\Program Files (x86)\Guardium\Guardium Installation Manager
  6. Start the first stage of the installation – Install button.
  7. The installation package contains a bundled Perl distribution which will be installed, but you can point to another one to run the GIM process in case of Perl standardization in your company.

    Selection of Perl distribution

  8. Select “Standard Mode” of installation.

    Installation type

  9. Insert the appliance IP address or host name.

    Appliance location

  10. Point to the IP address of the local network interface with access to the appliance.

    Local IP address

  11. Click the Finish button.
  12. Check the GIM communication on the appliance (Manage->Module Installation->Monitor GIM Processes).

    GIM agent status

  13. Check the GIM event reports – they are not available in the standard menu. Add them to your dashboard:
    1. Create a new dashboard (My Dashboards->Create New Dashboard)
    2. Click the “Add Report” button and add the “GIM Event List” and “Unauthenticated GIM Clients” reports to your dashboard

      GIM reports

  14. If both reports are empty, your GIM installation works properly.
  15. Check the installation status in the file:
    c:\guardiumStapLog.txt
  16. Check the GIM log:
    <GIM_HOME>\GIM\current\GIM.log

Standard installation with your own SSL certificates

The SSL configuration for GIM uses the same certificates on GIM and on the appliance. This leads to some limitations:

  • we can use only one certificate for the GIM appliance and all agents which communicate with it
  • if you plan to manage GIM from the Central Manager, all GIM agents will use this same certificate
  • certificate replacement on production will require reconfiguration of all GIM agents
  • the GIM failover appliance has to share the same certificate with the primary appliance
TASK 1: CERTIFICATE GENERATION

Prepare certificate for your GIM domain. You need 3 files in PEM format:

  • Certificate of your CA
  • Private key of your GIM domain certificate
  • Certificate of your GIM domain signed by your CA
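These three files can be prepared with openssl; a minimal sketch (the file names and CN values below are examples, not required values):

```shell
# Generate a CA key and a self-signed CA certificate (example names).
openssl genrsa -out gim-ca.key 2048
openssl req -x509 -new -key gim-ca.key -subj "/CN=Example GIM CA" \
        -days 3650 -out gim-ca.pem

# Generate the GIM domain key and a certificate signing request.
openssl genrsa -out gim-domain.key 2048
openssl req -new -key gim-domain.key -subj "/CN=gim.example.com" \
        -out gim-domain.csr

# Sign the GIM domain certificate with the CA.
openssl x509 -req -in gim-domain.csr -CA gim-ca.pem -CAkey gim-ca.key \
        -CAcreateserial -days 365 -out gim-domain.pem

# Sanity check: the domain certificate should verify against the CA.
openssl verify -CAfile gim-ca.pem gim-domain.pem
```

The resulting PEM files correspond to the list above: the CA certificate, the GIM domain private key and the signed GIM domain certificate.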
TASK 2: CERTIFICATE INSTALLATION ON APPLIANCE

Log in to the cli account and configure the GIM certificates:

store certificate gim console

This command requires pasting in the private key, the GIM domain certificate and the CA certificate.

Important: You can restore the default GIM certificates:

restore certificate gim default
TASK 3: GIM AGENT INSTALLATION

Copy your certificate files to the managed system (the c:\certificates directory in my case).
Then execute the GIM installer from the command line using this syntax:

setup.exe /s /z" --host=<appliance_ip> --path=<c:\\GIM_HOME> --localip=<managed_system_ip> --ca_file=<full_path_to_ca_certificate> --cert_file=<full_path_to_GIM_domain_certificate> --key_file=<full_path_to_key_of_your_GIM_domain_certificate>"

This command starts a silent installation. After a while the GIM agent should be installed. Confirm the status of the installation in the logs and reports.

Silent installation

SSL RECONFIGURATION

If the SSL certificate used by the GIM agent does not correspond with the SSL certificate stored on the appliance, we will notice an appropriate message in the status bar:

Unauthenticated GIM client

Also, the “GIM Events List” and “Unauthenticated GIM Clients” reports will include information about this situation:

GIM Reports

To restore authenticated communication between GIM and the appliance we must update the certificates on the appliance or point to the correct files in the GIM parameters.

To update the GIM parameters go to “Manage->Module Installation->Setup by Client” and click the Search button.

GIM client selection

Select the appropriate GIM client and push Next. To see the GIM module we have to unselect the “Display Only Bundles” option. Now we can select GIM and click Next.

Module selection

In the “Common Module Parameters” section point to the required files, then select your GIM agent under “Client Module Parameters” and click Apply to Selected.

Parameters assignment

Finally push the Apply to Clients and Install/Update buttons and order the update immediately in the pop-up window.

Update schedule

Check the update status – open the “Installation Status” window (the small “i” icon in the GIM agent row inside “Client Module Parameters“) and review the status, pressing the Refresh button until the information about the update disappears.

Update status

Now return to the GIM reports and notice that the unauthenticated GIM clients list is empty and the newest GIM event from this agent confirms the correct configuration.

GIM reports

Installation in listener mode

This type of installation not only speeds up the GIM installation process, but also provides a safe transfer of administrative tasks from the operating system administrators to the Guardium administration team.

TASK 1: SSL CONFIGURATION

If you would like to use your own SSL configuration, prepare the certificates as described in the previous section and install them on the appliance. Copy them to the managed system as well.

TASK 2: INSTALL GIM AGENT IN LISTENER MODE

Execute setup.exe and select “Listener Mode” this time.

Listener mode selection

Insert the IP address of the local network interface and then point to the certificate files. If you enter something wrong, the installer will describe the problem.

Certificates location

Accept the default port used by the listener (8445) or point to another one. In the next window specify the shared secret (in production this phrase will be defined by the system administrator).

Shared secret specification

Finish the agent installation, check the logs and confirm the status of “IBM Security Guardium Installation Manager” in the Windows services.

GIM windows service

Note: In version 10 the GIM service changed its name from “Guardium Installation Manager” to “IBM Security Guardium Installation Manager”.

You can also install the GIM agent in listener mode from the command line:

setup.exe /s /z" --shared_secret=<your_shared_secret> --path=<c:\\GIM_HOME> --localip=<managed_system_ip> --ca_file=<full_path_to_ca_certificate> --cert_file=<full_path_to_GIM_domain_certificate> --key_file=<full_path_to_key_of_your_GIM_domain_certificate>"
TASK 3: GIM REMOTE ACTIVATION

A GIM agent in listener mode can be associated only with an appliance which uses the correct SSL configuration and shared secret. If the appliance uses incorrect information, the GIM agent will reject the communication and wait for another activation request.

Go to the activation form in the appliance portal (Manage->Module Installation->GIM Remote Activation). Insert the correct information about the IP address of the GIM agent service, the GIM listener port and the shared secret (at this point the system administrator delegates his duties to the Guardium team).

GIM agent activation

Check the GIM status in the portal, logs and reports. Your GIM agent should be assigned to the appliance. After the assignment the agent switches to secure and authenticated communication, and the shared secret is no longer used.

The GIM Global Parameters (Manage->Module Installation->GIM Global Parameters) allow us to define a default shared secret used during GIM agent assignment. From my point of view, using the same shared secret for all agents makes sense only when the customer does not provide segregation of duties between system and Guardium administrators.
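If you do keep per-agent secrets, each phrase should be strong and random. For example (a sketch; any equivalent generator will do):

```shell
# Generate a 24-byte random secret, base64-encoded (32 characters).
openssl rand -base64 24
```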

GIM Auto-discovery

For large implementations Guardium provides a new functionality to discover GIM agents working in listener mode.

Go to Discover->Database Discovery->GIM Auto-discovery Configuration and add a new process (+ icon).

add new GIM discovery process

Provide the process name and add the range of IP addresses for the scan. Each host definition has to be confirmed using the Add scan button.

add hosts to process

Run the process – Run Once Now button. Then review the results – View Results button. This report allows you to associate a discovered agent with an appliance, but does not provide the possibility to specify a shared secret (it uses the default shared secret).

GIM discovery results

Notice that my report shows the gde1 machine, which has port 8445 open but does not contain any GIM service. The GIM discovery process uses a simple scan technique and identifies only whether port 8445 is open or not.

Summary:
GIM in Guardium 10 provides many new features that improve deployment and security.
The shared secret in listener mode implements SoD, and self-generated certificates allow us to identify GIM agents which do not belong to our administration domain.
Possible improvements in the future:
The shared secret and SSL configuration could enable a scenario with strict prevention of any unauthenticated communication (communication on port 8444 not allowed).
The SSL implementation based on the same certificates could be replaced by a standard 2-way PKI based on a CA.