Guardium 10.1.2 – review

GPU installation

Like most patches, it has to be installed top-down within the existing Guardium domain:

  1. Central Manager
  2. Backup Central Manager (do synchronization)
  3. Aggregators
  4. Collectors

GPU 200 requires the health-check patch 9997 to be installed first. The 10.1.2 update can be installed on top of any version of Guardium 10.

The GPU reboots the appliance. Existing VM Tools are automatically reassigned to the new Red Hat kernel.

Note: Consider an appliance rebuild if you want to use the EXT4 filesystems introduced with the new ISO installer.

View/Edit Mode in Dashboards

Each dashboard opened in a GUI session now starts in View mode.


Dashboard in View mode

View mode makes better use of the GUI space for data, especially when a dashboard is informational only.
From my point of view, Guardium administrators will not be happy with it, because it is not ergonomic for data investigation. However, once a dashboard has been switched to Edit mode, this setting is preserved for the current session.

It would be much more usable to be able to store this setting permanently per dashboard.

Deployment Health Dashboard extensions

Each new GPU adds more to the Deployment Health view. Besides the existing:
Deployment Health Table – summarizes the overall appliance status in a simple way


Deployment Health Table

Deployment Health Topology – shows connectivity and topology


Deployment Health Topology

Enterprise S-TAP View – displays information about S-TAPs across the whole Guardium infrastructure


Enterprise S-TAP view

the new GPU provides:

System Resources – located in Manage->Central Management; collates information about key resources on appliances.


System Resources

Deployment Health Dashboard – customizable dashboard focused on appliance resources and performance statistics


Deployment Health Dashboard

Owing to Managed Unit Groups, it is possible to create dynamic views filtered by a group of appliances, or to focus on a selected one. Statistics cover the Analyzer and Logger queues, buffer space, memory and disk usage, and sniffer restarts.
Additionally, the Events timeline report presents discovered issues and can be enriched with alerts gathered from the appliances. The alert definition contains additional fields to set up results for the dashboard:


Alert definition

Data Classification Engine – task parallelization

In large environments with hundreds of databases, the Guardium classification engine's limitation of executing only one job from the queue at a time was very painful. The current version allows these tasks to be parallelized on an appliance. In most cases, classification is managed on aggregators or the central manager, where CPU utilization is low, so with the new flag configured through GRDAPI we can review data content faster and more frequently.

grdapi set_classification_concurrency_limit limit=<your_limit>

The limit has to be lower than 100 and no higher than the number of CPU cores available on the appliance multiplied by 2.
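The constraint above can be sketched as a small clamp function. This is an illustrative helper only, not part of GRDAPI; the function name and signature are my own:

```python
def classification_concurrency_limit(requested: int, cpu_cores: int) -> int:
    # Documented constraint: the limit must stay below 100 and must not
    # exceed twice the number of CPU cores available on the appliance.
    return max(1, min(requested, 99, cpu_cores * 2))

print(classification_concurrency_limit(500, 8))   # clamped to 16 on an 8-core unit
```

On a 64-core appliance the same request would be clamped to 99, the documented hard ceiling.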

If you created a classification policy spanning many databases like this:


Classification datasources

you should change it to set of separate policies executed concurrently:


Separated datasources to different policies

Then, if you start a few classification processes together, they will be executed in parallel:


Classification Job Queue

File Activity Monitoring update

The Policy Builder for Files now allows many actions per monitored resource, so we can define different behavior for file read, modify, and delete operations.


File policy rule

The UID chain field from the Session entity provides the context of the user and process responsible for a file operation.


File Activity Report

At last we have File Activity reports available out of the box


File Activity Reports

but I suggest creating a clone of the File Activities report and sorting values in descending order by timestamp and sqlid (the session timestamp alone does not ensure that events are displayed in the correct order)
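The effect of the compound sort key can be illustrated with a small standalone example (field names here are illustrative, not the report's exact column names): events sharing a timestamp keep a deterministic order thanks to the sequence id.

```python
# Three audit events; two share the same timestamp.
events = [
    {"timestamp": "2016-05-22 10:00:01", "sqlid": 7, "op": "read"},
    {"timestamp": "2016-05-22 10:00:02", "sqlid": 9, "op": "delete"},
    {"timestamp": "2016-05-22 10:00:02", "sqlid": 8, "op": "modify"},
]

# Descending sort by (timestamp, sqlid): newest first, and ties on the
# timestamp are broken by the sequence id instead of being arbitrary.
ordered = sorted(events, key=lambda e: (e["timestamp"], e["sqlid"]), reverse=True)
print([e["sqlid"] for e in ordered])  # [9, 8, 7]
```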


File Activity query definition

New appliance installer

The new ISO installer simplifies the installation of new appliances (no need to apply GPU 100 and 200). It also removes the problem with support for new GDP licenses on appliances below GPU 100.

The 10.1.2 installer creates EXT4 Linux filesystems and extends the maximum size of supported storage. If you would like to use larger disks on the appliance, the rebuild procedure is needed (GPU 200 does not convert EXT3 to EXT4).

FSM driver deactivation on Linux/Unix

The new S-TAPs for Linux/Unix support a new TAP section parameter in guard_tap.ini:


where 0 means that the FSM driver is not activated.

Only manual modification of guard_tap.ini is supported at the moment.

Outlier detection (behavioral analysis) – new capabilities

Outlier detection is now available for file activity. Only one of the two functionalities, DAM or FAM, can be activated on an appliance.

Behavioral analysis can be switched on on aggregators, which allows analyzing user behavior from a wider perspective.

Views, reports and new anomaly types have been introduced – a significant update.

Entitlement Optimization

This GPU introduces a completely new user-authorization analysis engine. Besides the old Entitlement Reports, we can use the Entitlement Optimization tool, which retrieves user roles and privileges based on a direct connection to the database and on identified DDL commands. The tool presents changes in database authorizations,


Entitlement Optimization – What’s New

reports all existing users and their authorizations,


Entitlement Optimizations – Users & Roles

recommends changes and flags vulnerabilities,


Entitlement Optimizations – Recommendations

shows entitlements per user, object or DML operation, and provides the possibility to analyze what-if scenarios.

A very promising extension which clarifies the view of authorizations. It supports MSSQL and Oracle (in the first release) and the analysis is performed from the collector's perspective.

GDPR Accelerator

The new GDPR accelerator simplifies Guardium configuration to comply with the new EU regulation, which focuses on EU citizens' rights in the protection of their personal data.

With regard to GDPR, Guardium helps with:

  • personal data identification
  • monitoring of the personal data processing
  • vulnerabilities identification
  • identification of breaches
  • active protection of access by unauthorized users or suspicious sessions
  • keeping the whole compliance policy updated and working as a process


    GDPR Accelerator

New Data Nodes support

GPU 200 introduces S-TAP support for the HP Vertica Big Data platform, Cloudera Navigator monitoring using a Kafka cluster, and Hortonworks with Apache Ranger – another step toward Guardium supremacy in Big Data platform monitoring.

Also MemSQL – very fast in-memory DB – is supported now.

Data in-sight

A new type of audited data representation is available – Data In-Sight – in the Investigation Board (formerly Quick Search). Data access in motion in a 3D view – a simple example

Summary: An important step toward making data access monitoring easier and more transparent for non-technical users. The GPU mainly focuses on extending existing functionalities and making them more usable and stable.




Central Manager in HA configuration

Central Management is one of the key functionalities which simplify Guardium implementation and lower TCO. The possibility to patch, update, reconfigure and report across hundreds of monitored databases is a strong advantage.

Guardium implements this feature by selecting one of the aggregators as the Central Manager (CM). All other Guardium infrastructure units communicate with it and synchronize information. However, CM inaccessibility disrupts this process and prevents normal environment management. To address these problems, Guardium introduced the CM backup feature in version 9.

It covers two main problems:

  • planned CM shutdown (patching, upgrade)
  • CM failure

The CM backup configuration and switching between primary and secondary units need to be managed correctly to avoid problems on collector and aggregator layer.

General considerations for a backup CM:

  • main CM (primary) and CM backup (secondary) need to be accessible by all appliances in the administration domain
  • quick search and outlier detection configuration should be checked after changes on CM level
  • switching between CMs sometimes requires reassigning licenses

Note: Examples in this article refer to simple Guardium infrastructure with 4 units:

  • CM Primary (cmp)
  • CM Backup (cmb)
  • Collector 2 (coll2)
  • Collector 3 (coll3)

CM Backup registration

This procedure sets one of the aggregators belonging to the Guardium management domain as the backup CM and sends this information to all units.

Only an aggregator with the same patch level as the primary CM can be defined as the backup CM. This means that the same general, hotfix, sniffer and security patches should be installed on both machines.


Patch list on CM primary (cmp)


Patch list on aggregator (cmb)

The screenshots above show that both units have exactly the same patches on board. If the patch level is not the same, the aggregator cannot be promoted to the backup CM role.

Note: Patch level refers to the same version of Guardium services, MySQL, Red Hat and sniffer. If one unit was patched in the sequence 1, 4, 20, 31, 34 and the second 20, 31, 34, they are on the same patch level, because patches 1 and 4 are included in patch 20.
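The cumulative-patch rule in the note can be sketched as a set expansion. The `INCLUDES` mapping below is a hypothetical stand-in for the real patch metadata, chosen only to mirror the example in the note:

```python
# Hypothetical mapping: which earlier patches each cumulative patch includes.
INCLUDES = {20: {1, 4}}

def effective_patch_level(installed):
    """Expand an installed-patch sequence with the patches that its
    cumulative members already include."""
    level = set(installed)
    for patch in installed:
        level |= INCLUDES.get(patch, set())
    return level

# Units patched 1,4,20,31,34 and 20,31,34 end up at the same effective level,
# because patch 20 already includes patches 1 and 4.
print(effective_patch_level([1, 4, 20, 31, 34]) == effective_patch_level([20, 31, 34]))
```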

To designate an aggregator as the backup CM, on the primary CM go to Manage->Central Management->Central Management and push the Designate Backup CM button


Central Management view (cmp)

The pop-up window will display all aggregators which match the CM's patch level. Select an aggregator and push the Apply button


backup CM selection (cmp)

A simple message will inform you that the backup CM task has started and that the process can be monitored

Unfortunately, the “Guardium Monitor” dashboard does not exist in version 10. A simple summary of this process can be monitored in “Aggregation/Archive Log”, or you can create a report without any filters to see all messages.

Here is a link to the query definition – Query Definition

The same information is stored in the turbine_backup.log log on the CM

mysql select SQLGUARD_VERSION result is 10.0
logme   act_name= 'CM Backup' act_success='1' act_comment='Starting system backup with CM_SYNC 0'  act_day_num='now()' act_dumpfile='' act_header='1' 
****** Sun May 22 10:40:00 CEST 2016 ************
function do_cm_sync
write md5 to cm_sync_file.tgz.md5
scp: /opt/IBM/Guardium/scripts/scp.exp cm_sync_file.tgz aggregator@

Synchronization can also be monitored on the backup CM aggregator, in import_user_tables.log

Sun May 22 12:56:05 CEST 2016 - Import User Tables started
unit  is secondary CM
 move /var/IBM/Guardium/data/importdir/cm_sync_file.tgz.tmp to /var/IBM/Guardium/data/importdir/cm_sync_file.tgz 
number of table in DIST_INT and DATAMART tables = 19
calling /opt/IBM/Guardium/scripts/
Sun May 22 12:56:13 CEST 2016 - Handle agg tables started
Sun May 22 12:56:14 CEST 2016 - Handle agg tables finished
Sun May 22 12:56:14 CEST 2016 - Import User Tables done

Synchronization with the backup CM is repeated on the schedule defined under Managed Unit Portal User Synchronization

From this perspective, it is reasonable to repeat synchronization every few hours. In case of planned CM downtime, I suggest invoking synchronization manually using the Run Once Now button.

If the process finished successfully, on all units except the backup CM the information about the HA configuration will be visible in the Managed Unit list – the IP addresses of both CMs

Important: To avoid “split brain” problems, ensure that all managed units are able to refresh the list of CMs every time the IP address pair changes

Information about the list of managed units and their health status is available on the primary CM within the Central Management view

or inside the Managed Units report

Promoting the backup CM to primary

Note: Switching CM functionality to the secondary server is a manual task, but it can be instrumented remotely using GRDAPI.

This task can be invoked from the portal on the backup CM, from Setup->Central Management->Make Primary CM


Confirmation of promoting the CM to primary server

or from the CLI using a GRDAPI command

grdapi make_primary_cm

Output from this task is located in load_secondary_cm_sync_file.log on the backup CM

2016-05-20 22:56:11 - Import CM sync info. started
2016-05-20 22:56:11 -- invoking last user sync. 
2016-05-20 22:56:22 -- unit  is secondary CM, continue 
2016-05-20 22:56:27 -- file md5 is good, continue
2016-05-20 22:58:33 -- file decrypted successfuly, continue 
2016-05-20 22:59:10 -- file unzipped successfuly, continue 
2016-05-20 22:59:10 -- unzipped file is from version 10 beforeFox=0  
2016-05-20 22:59:28 -- Tables loaded to turbine successfully
2016-05-20 22:59:28 -- not before fox  
2016-05-20 22:59:48 - copied custom classes and stuff 
2016-05-20 22:59:50 -- Import CM sync info done

After a while, the portal on all managed units, including the promoted aggregator, will be restarted, and we will be able to see the new location of the primary CM (the old CM will disappear from this list)

and synchronization activity will be visible on the new CM

The list of units on the new CM does not contain the old CM, to avoid “split brain”

Warning: I occasionally noticed a lack of licenses on the promoted CM, although all previously licensed features remained active. If the keys disappear, they should be reapplied immediately.

Finally, the new CM has been defined and all managed units have updated this information.

Reconfiguring the old primary CM into the backup CM role

If the new CM promotion was made while the primary CM was active and communicating with appliances, the old CM will stop synchronization and its list of managed appliances will be empty

If the promotion is related to a CM failure, the old CM will, after restart, communicate with the new one and refresh the information about the current status of the administration domain – after a few minutes its list of managed units will be cleared too.

Guardium does not provide automatic role replacement between CMs. It requires a sequence of steps.

To remove CM functionality from the orphaned CM, a CLI command needs to be executed

delete unit type manager

It changes the appliance configuration to a standalone aggregator. Then we can join it to the administration domain again, but this time the domain is managed by the new CM (below, an example of registration from the CLI on cmp)

register management <new_CM_ip_address> 8443

Now the old CM has the aggregation function and can be delegated to the backup CM role


backup CM selection

After this task, both CMs have reversed roles.

Units patching process

Guardium administration tasks require CM displacement only in critical situations. There is no need to switch to the backup CM for standard patching (especially when hundreds of appliances would have to switch between CMs). Even if a patch forces a system reboot or stops critical services on the updated unit for minutes, the temporary unavailability of a unit will not stop any crucial Guardium environment function (except temporary unavailability of the managed unit's portal). So a realistic patching process should look like:

  1. patch CM
  2. patch  CM backup
  3. synchronize CM and CM backup
  4. patch other appliances in the CM administration domain.

“Split brain” situation management

A primary CM failure is not managed automatically. However, this situation will be signalled on all nodes when accessing the portal

I suggest using your existing IT monitoring system to check the health of the CM units via SNMP or other existing Guardium interfaces, to identify problems faster and invoke the new CM promotion remotely through GRDAPI.

The standard flow for managing a CM failure is:

  1. Analyze CM failure
  2. If the system can be restored, do that instead of switching to the backup CM (especially in large environments)

If system cannot be restored:

  1. Promote backup CM to primary role
  2. Setup another aggregator as CM backup

Despite the limited portal functionality on orphaned nodes, the backup CM can also be promoted from the GUI

I have tested two “split brain” scenarios (in a small test environment):

  • CM failure and reassignment to the backup CM
  • starting a stopped collector after the backup CM has been promoted and the old one is still unavailable

In both cases, after a few minutes the primary CM and the collector identified the situation and correctly managed the connection to the infrastructure.


Central Manager HA configuration is an important feature to avoid breaks in monitoring. Its design and implementation are good; however, some issues with license management and the new quick search features should be addressed in future releases.

Data classification (Part 1) – Overview

Sensitive data discovery is a key element of creating an accurate Data Governance policy. Knowledge about data location (at table and column level), relationships (how critical data are referenced) and movement (changes in schema definition) is crucial for monitoring and access protection.

Guardium provides many features to identify and manage information about sensitive data, both within databases and in files. This article focuses on data classification inside databases.

Classification process

Classification process

Classification process structure

Classification process – a manually or periodically executed search job for specific data (classification policy) within a defined scope (data source)

Data source – defines access to a database and the scope of analyzed schemas

Classification policy – a defined set of classification rules with their order and relations

Classification rule – a data search pattern based on a supported rule type, associated with rule actions

Rule action – an action invoked when a rule has been matched

The classification process discovers sensitive data described by classification policies within data sources and provides output for:

  • content of groups of sensitive objects used in monitoring policies
  • monitoring policy modification
  • event notification (policy violation, remote system notification)
  • sensitive data reporting

Classification process flow

Classification process flow

Analysis flow


  1. The Guardium appliance connects to a database (data source) using a JDBC driver
  2. Creates a list of tables, views and synonyms
  3. Gets a sample of data from each object
  4. Tries to match each column to a defined pattern rule
  5. For a matched rule, executes the defined actions
  6. Repeats 4 and 5 for each rule
  7. Closes the connection
  8. Repeats from 1 for each data source
  9. Returns the results
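The loop structure of the flow above can be sketched in a few lines. Everything here is a simulation: the in-memory `DATASOURCES` dictionary stands in for the JDBC connection and sampling steps, and the object and column names are invented for illustration.

```python
import re

# Toy in-memory stand-ins for data sources; a real appliance connects over JDBC
# and samples live tables (steps 1-3 of the flow).
DATASOURCES = {
    "glottery": {
        "glo.Credit_Cards": {"cc_number": ["4111111111111111", "4222222222222222"]},
        "glo.Products": {"name": ["lottery ticket"]},
    }
}

RULES = [("Credit Card", re.compile(r"^[0-9]{16}[ ]{0,20}$"))]

def classify(datasources, rules):
    """Sketch of the flow: for each data source, sample each object's
    columns and try every rule, collecting the matches."""
    results = []
    for ds_name, objects in datasources.items():      # step 8: each data source
        for obj, columns in objects.items():          # step 2: object list
            for col, sample in columns.items():       # step 3: data sample
                for rule_name, pattern in rules:      # steps 4-6: match rules
                    if all(pattern.match(v) for v in sample):
                        results.append((ds_name, obj, col, rule_name))
    return results                                    # step 9: results

print(classify(DATASOURCES, RULES))
```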

Classification process setup flows

Guardium 10 provides two scenarios for building a classification process:

  • from scratch – each element is created separately; broader elements can invoke more specialized tasks. Useful for people with good Guardium skills; allows configuring all existing classification features (Discover->Classification->Classification Policy Builder, Discover->Classification->Classification Process Builder)
  • end-to-end – a streamlined process that simplifies classification process creation and its automation. Some features are not available, but can be edited later using the first scenario (Discover->Classification->Classification Sensitive Data)

Classification menu

 Simple Classification Process – from scratch

Task description:

Find all tables and column names where credit card numbers are stored inside an MS-SQL engine.

My database Glottery contains the table Credit_Cards in the glo schema, with credit card information stored inside


Table with sensitive data

Process creation:

Go to the Classification Process Finder (Discover->Classifications->Classification Process Builder) and add a new process (+ icon)


Add new process

Insert the process name in the Process Description field and push the Modify button


Process definition

This opens the pop-up window Classification Policy Finder. Add a new policy using the + icon


Policy selection

In the Classification Policy Definition view, insert the policy Name, Category and Classification type, then save your policy using the Apply button


Policy description

This activates the Edit Rules button; select it


Policy description

In the Classification Policy Rules view, select the Add Rule button


Rule list

In the rule view, insert its name and select Search for Data from the Rule Type list


Rule definition

This refreshes the view; then put the pattern in the Search Expression field:

^[0-9]{16}[ ]{0,20}$

which is a simple representation of a credit card number (16 digits, trailed by a maximum of 20 spaces). Then save the rule using the Apply button
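Before saving the rule, the expression can be sanity-checked outside Guardium with a few lines of standalone Python (the sample numbers below are arbitrary test strings, not real card numbers):

```python
import re

# The rule's search pattern: 16 digits optionally trailed by up to 20 spaces.
CC_PATTERN = re.compile(r"^[0-9]{16}[ ]{0,20}$")

assert CC_PATTERN.match("4024007103826588")          # plain 16 digits
assert CC_PATTERN.match("4024007103826588   ")       # trailing spaces (CHAR padding)
assert not CC_PATTERN.match("4024-0071-0382-6588")   # separators are not matched
assert not CC_PATTERN.match("402400710382658")       # only 15 digits
```

Note that numbers stored with separators would be missed by this pattern; that is part of why the rule is only a first approximation.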


Rule definition

we will return to the rule list with the newly created rule.
Close the pop-up window. The newly created policy is not refreshed in the process view, so we need to reopen the process creation window. Select Discover->Classifications->Classification Process Builder again, enter a name, select our policy – Find CC in Tables – and press the Add Datasource button


Policy definition

Another pop-up window – Datasource Finder – displays the list of existing database definitions. Use the + icon to add a new one


Data source list

Insert the Name, select the appropriate engine from Database Type, and enter the database account credentials and the IP address and port on which the database operates. Save the definition using the Apply button and return to the data source list – Back


Data source definition

The newly created data source is now on the list. Select it and Add it to the process definition


Data source list

Now the classification process contains a policy and a data source. We can save it – Apply button


Classification process

This activates the Run Once Now button – manual process execution. Run it


Classification process

We can wait a while or review the status of the process execution. Go to Discover->Classifications->Guardium Job Queue. Our job will be at the top of the list


Job list

Refresh the report and wait for completion. Then return to the classification process list, select the Find CC process and push the View Results button


Process list

The pop-up window will contain the classification process execution results


Classification process results

Finally, our process discovered all tables containing strings that matched the simple regular expression. Notice the glottery.glo.passwords table in the results, which probably has nothing to do with credit card data. The continuation of this article identifies various techniques for eliminating such false positives.


Article continuation:

  • Part 2 – Classification rules
  • Part 3 – Action rules (soon)
  • Part 4 – Classification process and data sources (tbd)
  • Part 5 – End to End scenarios and Classification Automation (tbd)

Everything you always wanted to know about DAM but were afraid to ask

#1 – What exactly is DAM?

You can find many DAM definitions and be a little confused by the dozens of different features mentioned there, but some of them are always present and can be considered key requirements (DAM sensu stricto):

  • 100% visibility of the access to data
  • monitoring completely independent of database administrators
  • analysis made on SQL level
  • real time and correlated incident identification
  • audit of events related to incidents
  • support of forensic analysis

Some other features are not native to DAM, but their popularity means they are now widely recognized as part of DAM (DAM sensu lato):

  • access blocking (this feature is generally part of DAMP – Database Activity Monitoring & Protection known also as DBF – Database Firewall)
  • database user authorizations reporting
  • sensitive data identification
  • dynamic data masking (on database level)
  • vulnerability management (whatever that means to the requestor 😉 )

We can also identify some non-functional requirements relevant to any security solution:

  • minimal influence on the performance of the monitored system
  • support for heterogeneous database environments
  • support for enterprise deployments

It is very difficult to compare solutions. Be sure that you compare “apples” to “apples” instead of “apples” to “pears”. Very often a requested DAM feature works on a different layer and is covered by another solution (WAF, IPS, NG firewall, CM management).
Ask for support of your use cases and requirements rather than for the list of functions included in the vendor's box.

#2 – Agent-based or agent-less monitoring?

In the case of DAM, there can be only one answer to this question. 100% data traffic visibility is not possible if we rely on a network sniffer (agent-less), because we are not able to monitor local sessions.

How is your database accessed?

  • remotely (TCP, network pipes, encrypted connection)
  • locally (TCP, shared memory, network pipes)

Only an agent residing on the monitored environment can see local sessions and non-TCP protocols. It is hard to argue with this obvious statement. However, some remarks are important:

  • an agent installed on a monitored system affects it – but the question is about the acceptable level of this performance influence, not about the choice between agent-based and agent-less architecture
  • an agent requires updates, reconfiguration, database and system restarts – this can be true for a particular solution, but it is false in the case of Guardium

Only agent-based monitoring ensures coverage of the DAM requirements. Check the supportability of your platforms and protocols. Check the performance overhead on your database.

Even if you are able to disable all local access to the database, you still assume that your network configuration is stable and that all sessions are visible to the sniffer, which is not true at all.

#3 – Does your DAM prevent SQL Injection?

I love this stuff. This question is completely unrelated to the SQL level; it is a question about protecting a web application.
If you would like to stop SQL injection attacks, the solution is easy – use a WAF or an IPS/NG firewall. These types of solutions work on the network layer and are able to de-encapsulate HTTP/S data, parse it, and identify dangerous content (an injected SQL string or its meta-form).

It is a textbook example of how one commonly known word in a name leads to misunderstanding the crux of the problem and its resolution.

SQL injection must be analysed on the HTTP/S layer. It is not related to DAM protection.

If your WAF or IPS is not able to block the attack, DAM will still be able to analyse the SQL syntax, session context and data references. This is a normal DAM task and should not be mistaken for SQL injection protection.
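To make the layering point concrete, here is a deliberately naive sketch of a WAF-style check: it inspects an HTTP request parameter for common injection markers before any SQL ever reaches the database. Real WAFs use far more robust parsing than this toy regex, which is mine, not any product's:

```python
import re

# Toy WAF-style check on an HTTP request parameter: look for common
# SQL injection markers (quote, comment, statement separator, keywords).
SQLI_MARKERS = re.compile(r"('|--|;|\b(or|union|select)\b)", re.IGNORECASE)

def looks_like_injection(param_value: str) -> bool:
    """Return True if the parameter value carries injection markers."""
    return bool(SQLI_MARKERS.search(param_value))

print(looks_like_injection("1' OR '1'='1"))  # True  - classic tautology payload
print(looks_like_injection("12345"))         # False - a plain numeric id
```

The check operates entirely on the HTTP parameter, which is exactly why it belongs in the WAF/IPS layer rather than in DAM.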

#4 – Can we build Virtual Patch protection with DAM?

In many respects the answer is similar to the SQL injection case, but I will describe it more deeply.

VP is a security approach that creates protection outside the vulnerable system. Some examples:

  • the system can be exploited, but a patch does not exist or cannot be installed
  • a vulnerable functionality has to be available to a particular subject only
  • a service has a low reputation and whitelisting of activity is required

There are many situations where DAM can provide VP protection:

  • blocking access to a vulnerable stored procedure
  • restricting access to defined clients only
  • accepting only a defined list of SQLs and operations on an object

but if the vulnerable element resides in the database, we need to consider that exploitation can uncover another attack vector. That is why VP should primarily be defined on the network layer, using IPS and NG firewalls.

DAM can act as an auxiliary in building VP. Network in-line protection should be considered first.

#5 – What is your DAM data collection architecture?

Some solutions do not work in real time and use DB logs or additional event collection mechanisms to provide SQL visibility. If we do not need blocking, this architecture could be acceptable, but such logging depends on DB administrators and does not provide any segregation of duties (for example, an insider can modify or switch off the logging mechanism).

How audit data are stored and managed by DAM is another architectural question. Would you like to switch from one audit console to another to check the status of your monitored environment? Would you like to remember which DAM box contains the data required for the current analysis? And most importantly, do you know what kind of stored audit data will be key in your forensic searches?
A DAM solution usually monitors heterogeneous environments, covers dozens of databases, and gathers terabytes of audit archives over the retention period.
That is why I suggest considering the following:

  • possibility to manage the DAM environment from one console
  • possibility to aggregate data for de-duplication and to spread the performance load
  • central reporting from all DAM boxes
  • cross-reporting based on any parameter of an audit event
  • offline forensics on restored archives

DAM is a key element of your security infrastructure. Make sure its architectural limitations will not foreclose possibilities for development and integration.

#6 – Why do I not see user names in DAM?

At the SQL session level, we see the DB user name only. If you would like to get information about the application user name related to a particular SQL, you need to understand that this relation is created and managed by the application server (connection pool manager).

Every DAM faces this challenge and provides different solutions, but it always requires deeper analysis and sometimes application modification.

Guardium delivers many different solutions for Application User Translation in connection pools, which are described here – “Guardium – App User Translation”.

Application User Translation (AUT) is a correlation process between an application user and his SQLs inside an anonymised connection pool.
Be sure that AUT does not rely on simple correlation between timestamps in the application and the database. That kind of mapping in a multi-session channel is not credible and has no legal value.

#7 – I have a SIEM, why do I need DAM?

Security Information and Event Management (SIEM) systems are responsible for correlating security events in the IT infrastructure to identify incidents. These tools rely on the monitored systems' security logs, network activity, recognized vulnerabilities and reputation lists.

A SIEM manages security events delivered to it in a predefined schema; it is not able to understand the HTTP requests of your application, the SQL logic of your database transactions, the commands executed by your administrator, and so on. It expects the monitored system to prepare standardized output containing the relevant information, which can be normalized and analyzed by the incident identification rules inside the SIEM correlation engine.

Only DAM has the ability to analyze each SQL and identify access to sensitive data, monitor privileged activity, correlate access to tables, and predict the effect of DML/DDL/DCL actions.

In most cases, SIEM licensing is based on an EPS (events per second) metric. Even if the SIEM contained the DAM intelligence and we wanted to analyze all SQLs inside it, the cost of such a solution would be astronomical.

DAM delivers analyzed security events to the SIEM in a consistent data format, which enables their correlation with other monitored sources.

#8 – Does your DBF work on the session or the SQL level?

DAM blocking capability is often requested, but it should be considered very carefully. Most application traffic to a database consists of transactional statements, where the set of SQLs and their order affects the processing and its outcome. If we block one call in such a sequence, we can get an exception or, worse, loss of data consistency.

The business security priorities – confidentiality, integrity and availability (CIA) – lead to one possible conclusion: only a session reset is a safe method of blocking access, because it avoids executing incomplete transactions.
However, this method is useless with connection pools – resetting the SQL session kills transactions from different application sessions.
That is why blocking was actively used only for non-application access to the database, while application access was monitored with whitelisting.

Guardium 10 with the Query Rewrite feature redefines this approach. Now we can analyze an SQL and replace it – not in order to change the transaction's body, but to mark it as suspicious activity and cancel its execution.




(suspicious SQL) -> (rewritten to set @PARAMETER)
(@PARAMETER validation cancels execution)

It requires small changes in the application but provides “blocking” at the transaction level.
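The flow above can be simulated as follows. This is a conceptual sketch only – the variable name, detection condition and rewrite rule are illustrative assumptions, not actual Guardium Query Rewrite syntax:

```python
# Conceptual model of the Query Rewrite "blocking" pattern: the suspicious
# statement is replaced by one that sets a marker, and the application checks
# the marker before committing. All names here are hypothetical.
SUSPICIOUS_FLAG = "@SUSPICIOUS"   # illustrative session-variable name

def rewrite(sql):
    # Stand-in for the appliance-side rewrite rule: a flagged statement
    # is replaced by one that only sets the marker variable.
    if "credit_card" in sql.lower():          # assumed detection condition
        return f"SET {SUSPICIOUS_FLAG} = 1"
    return sql

def app_execute(statements):
    executed, flag_set = [], False
    for sql in statements:
        sql = rewrite(sql)
        if sql.startswith("SET " + SUSPICIOUS_FLAG):
            flag_set = True                   # marker seen, stop the transaction
            break
        executed.append(sql)
    return "ROLLBACK" if flag_set else "COMMIT"

print(app_execute(["SELECT * FROM credit_card"]))  # ROLLBACK
print(app_execute(["SELECT * FROM orders"]))       # COMMIT
```

The key point is that the transaction is cancelled cleanly by the application itself, instead of being cut mid-flight by a session reset.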

In most cases a connection reset is the only acceptable form of blocking. For application traffic, use Query Rewrite


Entitlement Reports

Each security incident analysis must answer the question of who is responsible for it. The question seems simple, but the answer is not.
Whose credentials were used in the attack, who granted them and, most importantly, was the person who used them their actual owner?

In the case of databases, the problem becomes even more complex due to the multi-dimensional matrix of privileges.
A separate kind of security solution – Privileged Identity Management (PIM) – addresses this problem by providing access accountability and session recording, but even with it in place we remain exposed to account takeover (ATO) and service exploitation.
In these cases we should be able to answer a few important questions:
  1. What privileges did the user have at the particular (incident) point in time?
  2. Were the authorizations consistent with change management, or did they bypass it?
  3. Were they sufficient for the attack?
  4. Was the account used related to the operations of its owner?

Answering the first question requires implementing a full identity-management process, which is not simple at all and usually covers database access management at the role level only.

The Guardium Entitlement Reports (ER) functionality is a simple but very useful feature for quickly determining an account's authorizations at a defined point in time.

New: Guardium 10 ER contains a new set of reports for DB2 on iSeries.

ER Prerequisites

ER works outside the standard activity monitoring and is based on scheduled data uploads to customized audit data domains. Similar to Data Classification and Vulnerability Assessment, it uses a direct connection to the monitored database to collect the required information.

We need to create an appropriate technical account for each database where ER data will be gathered. On each Guardium appliance there are SQL scripts with a role definition containing all the grants required to read the ER content.

You can download them over fileserver; they are located in /log/debug-logs/entitlemnts_monitor_role/

Entitlement scripts


When the role has been created and attached to a technical account, we can create a data source (Setup->Tools and Views->Datasource Definitions) for “Custom Domain”


Data source definition

Use the plus icon to add a new data source; the example below defines MSSQL access using SSL without authentication


MSSQL data source

The Test Connection button is activated once the data source configuration has been saved (Apply).

Tip: The data source creation process can be invoked directly from the ER process, but for clarity it is presented here as a separate task

Data Upload

Now we can define the data upload process. For each database type there is a set of ER reports, all based on custom tables. For example, for Oracle there are 14 prepared tables (all names start with ORA) – Reports->Report Configuration Tools->Custom Table Builder

Custom table builder


We need to configure data upload for each report of interest.
Select the report and push the Upload Data button

Data upload


The Add Datasource button attaches the data sources for which we will create entitlement snapshots. We can point to multiple previously defined data sources or create a new one.

The Overwrite flags (per upload, per datasource) define how the data will be stored:

  • if both flags are unselected, old data will not be removed when a new snapshot arrives (each ER data record contains a timestamp, so we are able to identify snapshots in time)
  • per upload means the old data will be erased every time an upload is executed – it makes sense only when the particular report contains a single datasource, or when we want to remove the old data intentionally
  • the per datasource flag ensures that only the old data of the currently updated datasource will be erased – it protects the old data of datasources that are not available during the current upload
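The flag semantics above can be modelled in a few lines. This is a conceptual model of the purge decision, not the actual Guardium implementation:

```python
# Model of the two Overwrite flags: which previously stored snapshot rows
# survive a new upload. Purely illustrative of the documented semantics.
def surviving_rows(old_rows, uploaded_sources,
                   per_upload=False, per_datasource=False):
    # old_rows: list of (datasource, timestamp) tuples already in the table
    if per_upload:
        return []  # every upload wipes the whole table first
    if per_datasource:
        # only rows of the datasources touched by this upload are erased
        return [r for r in old_rows if r[0] not in uploaded_sources]
    return old_rows  # no flag set: nothing erased, snapshots accumulate

old = [("oracle-prod", "2016-07-01"), ("oracle-dr", "2016-07-01")]
# DR site unreachable: this upload covers only oracle-prod
print(surviving_rows(old, {"oracle-prod"}))                       # both rows kept
print(surviving_rows(old, {"oracle-prod"}, per_datasource=True))  # DR row kept
print(surviving_rows(old, {"oracle-prod"}, per_upload=True))      # []
```

The per datasource case shows why that flag is the safer choice when a report aggregates several databases that are not always reachable.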

The default purge for custom domains is executed every day and removes data older than 60 days. This behavior can be changed (described later)

Now we can upload data manually (Run Once Now) and/or define how often the authorization snapshot will be created (Modify Schedule)

Configured data upload


How often snapshots are created is the user's decision; however, some recommendations:

  • if you overwrite data, you need to archive it first (using an audit process)
  • data upload gets data directly from the database; it is not a heavy task, but for large databases with thousands of roles and tables the quantity of data can be huge
  • snapshots provide the authorization state at a particular time; to cover forensics requirements we also need to audit the DCL (grant, revoke) transactions
  • a 6-24 hour snapshot schedule is usually sufficient

    Data upload scheduler


The data upload configuration steps described here should be repeated for all ER custom tables of interest.
Now we can review the uploaded data (add the ER reports to your dashboard)

Predefined ER list for Informix



ER report example – MSSQL objects visible for everyone

Predefined ER reports have a raw format and cannot be modified, so I suggest recreating them to get the expected appearance.

ER report customization

This standard report presents all privileges and roles assigned to a user on an MSSQL server. You can notice that 2 snapshots have been created in the last 3 hours, and we cannot filter on them as we can on the other parameters

2 snapshots in a standard report


Below are some report variations:

#1 – Last snapshot with quick data filtering





We see the last snapshot from the defined time frame, and we can filter data by user, authorization, authorization type and database

New: Guardium 10 allows hiding particular columns of a query. No more query reconstruction for this purpose 🙂

Column configuration


#2 – List of snapshots

Query and Report


New: the “Runtime Parameter Configuration” window separates user-defined parameters from the others. No more searching the parameter list for our own 🙂

Report runtime parameters


#3 – Number of authorizations per user



Graphical report


#4 – Authorizations from particular snapshot

Unfortunately, a report parameter based on a timestamp can be defined with one-day granularity only. It does not allow us to point to a specific snapshot. Really?

We can use a computed attribute to create a snapshot id based on the snapshot timestamp:

grdapi create_computed_attribute attributeLabel="Snapshot ID" entityLabel="MSSQL2005/2008 Role/Sys Privs Granted To User" expression="MD5(SQLGUARD_TIMESTAMP)"

This command creates a new dynamically computed attribute as an MD5 hash string of the timestamp value.
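Why does this work as a snapshot id? Each distinct snapshot timestamp hashes to a distinct, stable string that can then be used as an exact-match report parameter. The Python sketch below illustrates the idea (timestamp values are invented for the example):

```python
# Illustration of the MD5(SQLGUARD_TIMESTAMP) computed attribute: distinct
# timestamps yield distinct ids, and the same timestamp always yields the
# same id, so the hash can serve as an exact-match filter value.
import hashlib

def snapshot_id(ts):
    return hashlib.md5(ts.encode()).hexdigest()

a = snapshot_id("2016-07-12 00:45:00")
b = snapshot_id("2016-07-12 00:49:00")
print(a == b)                                    # False: different snapshots
print(snapshot_id("2016-07-12 00:45:00") == a)   # True: the id is stable
```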

Now I can modify the snapshot list report to display this unique id

Query and Report


and add the snapshot id to the parameter list of any report to filter data by timestamp. Easy!

Report with computed attribute


Below is an example of a dashboard for incident analysis based on ER reports



We can notice in this example that the badguy user's authorizations changed between 00:45 and 00:49. Using the snapshot id parameter we can present these two snapshots side by side and identify the change quickly.

How to create your own ER report?

Guardium delivers many different ER reports for DB2, Informix, MSSQL, MySQL, Netezza, Oracle, PostgreSQL, SAP ASE, SAP IQ and Teradata. The custom domain mechanism allows you to create your own reports for other databases, or to add reports covering information unavailable in the predefined ones.

A good example is MSSQL, where user login status is not visible in the predefined tables. From an incident-management perspective this information is crucial and should be gathered.

I have prepared the SQL to get this information:

select loginname AS 'user',
       CASE denylogin WHEN 0 THEN 'ACTIVE' WHEN 1 THEN 'INACTIVE' END AS status,
       CASE isntuser WHEN 0 THEN 'LOCAL' WHEN 1 THEN 'ACTIVE DIRECTORY' END AS 'user type',
       dbname AS 'default database'
from master..syslogins
order by loginname

Next, we need to create a new custom table (Reports->Report Configuration Tools->Custom Table Builder). We have two options: define the table from scratch or import the structure from SQL. I prefer the second method:

Custom table creation


In “SQL Statement” we need to insert the SQL that returns a sample of reference data. Add Datasource lets us specify the database where the sample exists. Finally, we are ready to Retrieve the table definition

Table structure import


If the import was successful, we return to “Custom Tables”. To review the structure, push the Modify button

Custom table selection


We can modify fields, define keys and set up references to Guardium groups.

Custom table modification


Now we can Apply the changes and set up the Upload data configuration

Data upload


Note: a custom table definition can be modified only as long as it contains no data

We have data, but it is not available for reporting until we create a new report domain (Report->Report Configuration Tool->Custom Domain Builder). The plus (+) icon allows us to create a new domain. Enter a “Domain name” and locate the custom table created earlier. Move it from “Available entities” to “Domain entities”. Then select the default timestamp from the “Timestamp Attribute” list and Apply


Custom domain creation

Our new domain is now visible in the custom query builder (Report->Report Configuration Tools->Custom Query Builder). Select the domain and create all the required queries and reports. Below is a report with all MSSQL logins and their status

MSSQL logins


ER data management

To be able to use the collected data in forensic analysis, we need to set proper retention (archive and restore). These settings are available in “Custom Table Builder” via the Purge/Archive button. The Archive checkbox attaches data from a custom table to the data archived by the standard archive process. We can define how long data will be available locally (Purge data older than) and schedule the purge process (60 days is the default value)

Custom table archive


Tip: Do not forget to archive the data stored in custom tables

Summary: ER is a useful tool in forensic analysis and significantly shortens the time needed to identify the permissions held by the subject of an incident. The ability to customize data presentation, schedule data loads and expand the scope of collected information makes this tool an indispensable element of a security officer's duty. The data can also be used to identify privileged accounts for the proper definition of the audit policy.