DAM in GDPR context

The buzz around GDPR has led to a situation where customers receive messages claiming that every existing security solution has “something” for it :). It is a good sales strategy but definitely a painful tactic for Security Officers with a limited budget and a hard nut to crack before 25 May 2018.

Here I would like to review the GDPR requirements (as-is, because the European Data Protection Board has not yet provided certification guidelines) from a DAM perspective, and go through the most popular questions tied to DAM in the GDPR context.

Where does DAM cover GDPR requirements?

  • Article 5.1(f) – The data protection principles assume protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures.
    DAM is a dedicated solution to monitor the SQL stream – granular policies can narrow access to accepted vectors only, behavioral analysis identifies anomalies, prevention rules block suspicious activity, SQL analysis dynamically masks data and even stops the execution of dangerous commands.
    Administrative fines and possible civil actions should change the approach of PI administrators and make them consider manual organizational measures insufficient.
  • Article 5.2 – Demonstrating data protection based on manual processes is neither efficient nor sufficient.
    Only solutions that act proactively or react automatically to correlated events can cover the GDPR requirements. In addition to SQL logging, DAM provides information about activity context (who, when, what), strong reporting capabilities to review an analyzed incident quickly, policies which identify PI processing, quantitative analysis that highlights abnormal behavior, and a self-learning engine discovering anomalies in standard access to the monitored system.
    DAM blocking capabilities are unique in providing full control over privileged accounts and implementing access control that covers the segregation of duties demand.
  • Article 9 – Processing of special categories of personal data
    Sensitive personal data (racial or ethnic origin, political opinions, religious beliefs, genetic and biometric data, health condition or sexual orientation) included in a PI administrator’s databases strengthen the GDPR requirements in two places:

    • Article 30.5 – even if a company employs fewer than 250 workers, its PI processing has to be recorded
    • Article 83.5(a) – administrative fines related to lack of compliance on a silo with sensitive information are doubled

A DAM data classification engine can identify sensitive information with a minimal number of false positives, based on catalog, regular expression, dictionary or custom searches. The results allow you to focus on the most critical assets from the GDPR perspective.
Classification and database discovery processes executed on a schedule rapidly identify changes inside database schemas and network assets.
Awareness of where sensitive data are located is crucial to confirm the efficiency of working processes for data pseudonymization and minimization.
Data lake monitoring can be implemented only in the largest corporations; knowing what should be protected is the first step before we spend the limited budget.

  • Article 24.1 and 24.2 – Data controller duties
    These two articles impose the data protection obligation on the data controller as an auditable and controlled process. If we consider databases, data warehouses, big data and file repositories, DAM was created exactly for this.
  • Article 28 – Data processor duties
    When data are processed on behalf of a data controller (a very common situation), the processor must guarantee that access to PI takes place on the controller’s written authorization. Only data access monitoring can provide a real access registry.
  • Article 30 – Records of processing activities – introduces the requirement of personal information access accountability.
    Small companies will implement this goal by creating a simple registry based on manual data access descriptions, sometimes enriched by an approval workflow.
    However, this low-cost solution comes with complex reporting and the lack of a non-repudiable registry, so you should consider a better mechanism to register access to GDPR-protected data.
  • Article 32.1(d) – Security of processing points at vulnerability assessment and system hardening.
    Popular vulnerability assessment platforms treat relational databases harshly. DAM originated from the RDBMS world and provides rich checks, not only focused on CVEs and standards (CIS, STIG). Based on years of experience, it also includes analysis of SQL traffic, the influence of configuration changes on the risk score, authorization snapshots and excessive rights identification.
    For the most critical systems, extending the existing VA solution in your environment with DAM can be very helpful.
  • Article 33.3(a) – Data breach notification imposes not only the requirement for immediate notification (within 72 hours).
    The breach notification should contain information about the scale of the leakage or other type of incident. Only DAM solutions can identify this scope (SQL audit) and minimize the damages related to data owner notification and possible fines.
    Be aware that:

    • DLPs (agent and network) cover only data on workstations and remote access. What about local sessions on servers – are you sure that your DLP provides the same SQL structure and session context analysis as DAM solutions specialized for this purpose?
    • PIMs monitor the access of privileged users to production systems. They are not aware of SQL syntax and session context. PIM should be considered in a GDPR compliance program, but the real value appears when DAM and PIM are integrated together (directly or at the SIEM level).
  • Article 34 – Communication of a personal data breach to the data subject
    Technically, DAM solutions are able to parse the output of SELECTs, but the usability of this functionality is limited. The size of the outgoing stream is unpredictable and can lead to a situation where the monitoring system needs more hardware resources than the monitored one (especially on a data warehouse).
    However, DAM can provide the list of SQL instructions executed inside a suspicious session and simplify the recognition of the attack range. In case of data modification (DMLs), the audited SQL activity can directly identify the changes and the required remediation.

Does DAM protect applications in the GDPR context?

The 3-tier architecture of most applications (web client, application server, data store) anonymizes access to data at the silo level. We cannot identify the application user on the SQL level based only on the database user name, which points to an account from the connection pool. However, DAM can be configured to extract this information from SQL, the JDBC encapsulation message, web server logs and other streams. In most cases this kind of integration requires additional implementation effort, in the worst case including application code changes.
So, if the application user context is visible at the DAM level, we can utilize it exactly the same way as described earlier, with two caveats:

  • Never kill a session in the connection pool, because the SQL stream inside it belongs to many application users. A killed session will raise exceptions on the application layer and reinitialize application sessions for thousands of clients.
  • Never mask data or rewrite SQL in the connection pool. Masked data will in most cases have an inappropriate format and will lead to application exceptions. Even if the masked data have an accepted format (data tokenization), the information receiver will have no idea about this fact and can make business or legal decisions based on incorrect information – data masking for applications should be implemented on the application or presentation layer.
    SQL rewritten inside a SQL transaction can change its essence and lead to a loss of data consistency.

DAM without the application user context is still valuable in this stream, identifying anomalies, errors and behavioral fluctuations using quantitative analysis.

Can DAM implement data pseudonymization?

Hmm, we should start from the basic question – what is pseudonymization?
I have seen many web articles which directly equate this term with data masking, but I disagree with this approach.

GDPR defines pseudonymization as the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.

I treat this definition as a consequence and continuation of the data minimization process. Briefly, if PI data are separated from transactional data (data minimization), only the neutral relation between these two stores (for example a customerID) should be used in the whole data processing flow. Only on demand and with approval can the customerID be translated into a form which identifies the person.
Can DAM help here? – NOPE.
However, implementing data minimization and pseudonymization for existing systems means complete application redevelopment – who can afford it? So only new, GDPR-ready applications will come with this kind of functionality on board.
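
To illustrate the idea with a toy example (a minimal sketch of the data model only, unrelated to any DAM product; all table and column names are hypothetical):

import sqlite3

con = sqlite3.connect(":memory:")
# Transactional store keeps only the opaque key (the pseudonym).
con.execute("CREATE TABLE orders (order_id INTEGER, customer_id TEXT, amount REAL)")
# PI store is kept separately, ideally in another system with its own access control.
con.execute("CREATE TABLE pi_store (customer_id TEXT, full_name TEXT, email TEXT)")
con.execute("INSERT INTO orders VALUES (1, 'C-1001', 49.99)")
con.execute("INSERT INTO pi_store VALUES ('C-1001', 'John Doe', 'john@example.com')")

def reidentify(customer_id, approved):
    # The pseudonym is translated back to the person only on demand and with approval.
    if not approved:
        raise PermissionError("re-identification requires approval")
    return con.execute("SELECT full_name, email FROM pi_store WHERE customer_id = ?",
                       (customer_id,)).fetchone()

# Daily processing works on pseudonyms only:
print(con.execute("SELECT customer_id, amount FROM orders").fetchall())
# Re-identification is an exceptional, controlled operation:
print(reidentify("C-1001", approved=True))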

For existing systems we try to avoid exposing personal information by using data masking, and here DAM can also be helpful:

  • preproduction (test) data – but why DAM here instead of data tokenization?
  • access outside the application stream:
    • masking of SELECT output – most DAMs provide this functionality, but efficiency is the main problem (see the sketch after this list)
    • query rewrite – very suitable, and makes it possible to tokenize or encrypt data instead of simply masking it
  • access from the application stream – as I mentioned earlier, application masking should be implemented on the application or presentation layer only
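
Conceptually, masking of SELECT output replaces the sensitive value on the fly before it reaches the client. A minimal sketch of the transformation itself (my own illustration, not any vendor's implementation):

import re

def mask_pan(value):
    # Mask a 16-digit card number, keeping only the last 4 digits visible.
    return re.sub(r"\b(\d{12})(\d{4})\b", lambda m: "*" * 12 + m.group(2), value)

print(mask_pan("payment with 4111111111111111 accepted"))
# -> payment with ************1111 accepted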

Member state implementations of GDPR

GDPR is a regulation and unifies law across the European Union, but in a few of its articles we can find derogations. A good example is Article 9.4, where health records can be managed in a different way according to a member state’s decision. Does it mean that my decision about the scope and type of protection should be postponed until the parliament implements the law?
Definitely not – you should not wait, because your data may contain personal information about citizens of another EU country and you can be sued based on their state law.

DAM and “right to be forgotten”

It is a common question raised during DAM discussions.
Article 17 introduces the data subject’s right to have their personal information removed on request. DAM is a monitoring solution: it does not cooperate directly with the DB engine (to cover the SoD requirement) and has no authorization to modify data. This simple explanation leads to only one correct answer to the title question – DAM is not a component which can be useful in implementing the citizen’s right to be forgotten.
By the way, who will agree to data removal in a system with thousands of relations, where it can lead to a loss of data consistency discovered a year later? I think that only new systems with fully implemented data minimization and pseudonymization principles will be able to satisfy this right in an easy way. If all personal information is separated from transactions, PI removal or simple encryption will provide a suitable solution without any additional effort.

Administrative fines mantra

Have you seen any GDPR-related article without a remark about “huge fines up to 20 million euro or 4% of company turnover”?
Do you believe that your government will decide to kill local, average or small companies because of GDPR?
If your answers are negative, you should consider a much more interesting case. In Article 82 the GDPR introduces the citizen’s right to compensation. Many organizations seriously consider the costs of civil actions and their possible influence on business.

New type of ransomware
Standard ransomware based on data encryption is not efficient because victims rarely pay (private persons are not able to pay large amounts of money, backups exist, bitcoin accounts get blocked).
With GDPR, stolen data can become a simple way to extort ransom from an organization wishing to avoid penalties and massive civil actions.
I think that data currently gathered from unaware companies are stored somewhere on the darknet as a starter package for a new type of “business” next year. 😦

Summary:

DAM definitely should be considered an important element of any GDPR compliance program because of:

  • PI processing monitoring
  • data classification
  • data masking
  • unauthorized data access protection
  • vulnerability assessment

and achieves the best value when it is integrated with PIM, IAM, Encryption and SIEM.

 


Central Manager in HA configuration

Central Management is one of the key functionalities which simplify Guardium implementation and lower TCO. The possibility to patch, update, reconfigure and report across hundreds of monitored databases is a strong advantage.

Guardium implements this feature by selecting one of the aggregators as the Central Manager (CM). All other Guardium infrastructure units communicate with it and synchronize information. However, CM inaccessibility disrupts this process and prevents normal environment management. To cover these problems, Guardium introduced the CM backup feature in version 9.

It covers two main problems:

  • planned CM shutdown (patching, upgrade)
  • CM failure

The CM backup configuration and switching between the primary and secondary units need to be managed correctly to avoid problems on the collector and aggregator layers.

General considerations for the backup CM:

  • the main CM (primary) and the backup CM (secondary) need to be accessible by all appliances in the administration domain
  • quick search and outlier detection configuration should be checked after changes on the CM level
  • switching between CMs sometimes requires reassigning licenses

Note: Examples in this article refer to simple Guardium infrastructure with 4 units:

  • CM Primary (cmp, 192.168.0.80)
  • CM Backup (cmb, 192.168.0.79)
  • Collector 2 (coll2, 192.168.0.82)
  • Collector 3 (coll3, 192.168.0.83)

CM Backup registration

This procedure sets one of the aggregators belonging to the Guardium management domain as the backup CM and sends this information to all units.

Only an aggregator with the same patch level as the primary CM can be defined as the backup CM. It means that the same general, hotfix, sniffer and security patches should be installed on both machines.


Patch list on CM primary (cmp)


Patch list on aggregator (cmb)

The screenshots above show that both units have exactly the same patches on board. If the patch levels are not the same, the aggregator cannot be promoted to the backup CM role.

Note: Patch level refers to the same version of Guardium services, MySQL, Red Hat and sniffer. If one unit was patched in the sequence 1, 4, 20, 31, 34 and the second 20, 31, 34, they are on the same patch level, because patches 1 and 4 are included in patch 20.

To designate an aggregator as the backup CM, on the primary CM go to Manage->Central Management->Central Management and push the Designate Backup CM button


Central Management view (cmp)

The pop-up window will display all aggregators which have the same patch level as the CM. Select an aggregator and push the Apply button


backup CM selection (cmp)

A simple message informs you that the backup CM task has started and that the process can be monitored.

Unfortunately, the “Guardium Monitor” dashboard does not exist in version 10. A simple summary of this process can be monitored in the “Aggregation/Archive Log”, or you can create a report without any filters to see all messages.

Here is a link to the query definition – Query Definition

The same information is stored in the turbine_backup.log log on the CM

mysql select SQLGUARD_VERSION result is 10.0
logme   act_name= 'CM Backup' act_success='1' act_comment='Starting system backup with CM_SYNC 192.168.0.80 0'  act_day_num='now()' act_dumpfile='' act_header='1' 
****** Sun May 22 10:40:00 CEST 2016 ************
Parameters: 192.168.0.80 
function do_cm_sync
---------------
write md5 to cm_sync_file.tgz.md5
scp: /opt/IBM/Guardium/scripts/scp.exp cm_sync_file.tgz aggregator@192.168.0.80:/var/IBM/Guardium/data/importdir/cm_sync_file.tgz

Synchronization can also be monitored on the backup CM aggregator, in import_user_tables.log

Sun May 22 12:56:05 CEST 2016 - Import User Tables started
unit  is secondary CM
 move /var/IBM/Guardium/data/importdir/cm_sync_file.tgz.tmp to /var/IBM/Guardium/data/importdir/cm_sync_file.tgz 
number of table in DIST_INT and DATAMART tables = 19
calling /opt/IBM/Guardium/scripts/handle_agg_tables.sh
Sun May 22 12:56:13 CEST 2016 - Handle agg tables started
Sun May 22 12:56:14 CEST 2016 - Handle agg tables finished
Sun May 22 12:56:14 CEST 2016 - Import User Tables done

Synchronization with the backup CM is repeated on the schedule defined under Managed Unit Portal User Synchronization

From this perspective, repeating the synchronization every few hours is a reasonable choice. In case of planned CM downtime I suggest invoking the synchronization manually using the Run Once Now button.

If the process finished successfully, the HA configuration – the IP addresses of both CMs – will be visible in the Managed Unit list on all units except the backup CM

Important: To avoid “split brain” problems, ensure that all managed units have had the chance to refresh the list of CMs every time the IP address pair changes

Information about the managed units and their health status is available on the primary CM in the Central Management view

or inside the Managed Units report

Promoting the backup CM to primary

Note: Switching CM functionality to the secondary server is a manual task, but it can be instrumented remotely using GRDAPI.

This task can be invoked from the portal on the backup CM, under Setup->Central Management->Make Primary CM


Confirming the promotion of the CM to primary server

or from the CLI using a GRDAPI command

grdapi make_primary_cm

The output from this task is located in load_secondary_cm_sync_file.log on the backup CM

2016-05-20 22:56:11 - Import CM sync info. started
2016-05-20 22:56:11 -- invoking last user sync. 
2016-05-20 22:56:22 -- unit  is secondary CM, continue 
2016-05-20 22:56:27 -- file md5 is good, continue
2016-05-20 22:58:33 -- file decrypted successfuly, continue 
2016-05-20 22:59:10 -- file unzipped successfuly, continue 
2016-05-20 22:59:10 -- unzipped file is from version 10 beforeFox=0  
2016-05-20 22:59:28 -- Tables loaded to turbine successfully
2016-05-20 22:59:28 -- not before fox  
2016-05-20 22:59:48 - copied custom classes and stuff 
2016-05-20 22:59:50 -- Import CM sync info done

After a while the portal on all managed units, including the promoted aggregator, will be restarted and we are able to see the new location of the primary CM (the old CM will disappear from this list)

Synchronization activity will also be visible on the new CM

The list of units on the new CM does not contain the old CM, to avoid “split brain”

Warning: I occasionally noticed a lack of licenses on a promoted CM, although all previously licensed features remained active. If the keys disappear, they should be reapplied immediately

Finally, the new CM has been defined and all managed units have updated this information.

Reconfiguring the old primary CM for the backup CM role

If the new CM promotion was made while the primary CM was active and communicating with the appliances, it will stop synchronization and its list of managed appliances will be empty

If the promotion is related to a CM failure, the old CM will, after restart, communicate with the new one and refresh the information about the current status of the administration domain – after a few minutes its list of managed units will be cleared too.

Guardium does not provide automatic role replacement between CMs. It requires a sequence of steps.

To remove CM functionality from the orphaned CM, the following CLI command needs to be executed

delete unit type manager

It changes the appliance configuration to a standalone aggregator. Then we can join it to the administration domain again, but this time the domain is managed by the new CM (below, an example of registration from the CLI on cmp)

register management <new_CM_ip_address> 8443

Now the old CM has the aggregation function and can be delegated to the backup CM role


backup CM selection

After this task, both CMs have reversed roles.

Units patching process

Guardium administration tasks will require CM displacement only in critical situations. There is no need to switch to the backup CM for standard patching (especially when hundreds of appliances would have to switch between CMs). Even if a patch forces a system reboot or stops critical services on the updated unit for minutes, the temporary unavailability of the unit will not stop any crucial Guardium environment functions (except temporary unavailability of the managed units’ portal). So a realistic patching process should look like:

  1. patch CM
  2. patch CM backup
  3. synchronize CM and CM backup
  4. patch other appliances in the CM administration domain.

“Split brain” situation management

A primary CM failure is not handled automatically. However, this situation will be signaled on all nodes when accessing the portal

I suggest using your existing IT monitoring system to check the health of the CM units via SNMP or other existing Guardium interfaces, to identify problems faster and invoke the new CM promotion remotely via GRDAPI.
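
As a sketch of that idea – assuming the Guardium CLI is reachable over SSH and that your monitoring logic has already confirmed the switch is really needed (the hosts and the cli account are illustrative):

import subprocess

PRIMARY_CM = "192.168.0.80"
BACKUP_CM = "192.168.0.79"

def is_alive(host):
    # Crude reachability probe; in practice use SNMP or other Guardium interfaces.
    return subprocess.run(["ping", "-c", "3", host], capture_output=True).returncode == 0

if not is_alive(PRIMARY_CM):
    # Promote the backup CM by running the documented GRDAPI call on its CLI.
    subprocess.run(["ssh", "cli@" + BACKUP_CM, "grdapi make_primary_cm"], check=True)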

The standard flow for managing a CM failure is:

  1. Analyze the CM failure
  2. If the system can be restored, do that instead of switching to the backup CM (especially in large environments)

If the system cannot be restored:

  1. Promote the backup CM to the primary role
  2. Set up another aggregator as the backup CM

Despite the limited portal functionality on orphaned nodes, the backup CM can also be promoted from the GUI

I have tested two “split brain” scenarios (under small test conditions):

  • CM failure and reassignment to the backup CM
  • starting a stopped collector after the backup CM has been promoted and the old one is still unavailable

In both cases, after a few minutes the primary CM and the collector identified the situation and correctly managed their connection to the infrastructure.

Summary:

The Central Manager HA configuration is an important feature to avoid breaks in monitoring. Its design and implementation are good; however, some issues with license management and the new quick search features should be addressed in new releases.

Data classification (Part 1) – Overview

Sensitive data discovery is a key element in creating an accurate Data Governance policy. Knowledge about data location (on the table and column level), relationships (how the critical data are referenced) and movement (changes in schema definition) is crucial for monitoring and access protection.

Guardium provides many features to identify and manage information about sensitive data, both within databases and in the analysis of files. This article focuses on data classification inside databases.

Classification process


Classification process structure

Classification process – a manually or periodically executed search job for specific data (classification policy) within a defined scope (data source)

Data source – defines access to a database and the scope of analyzed schemas

Classification policy – a defined set of classification rules with their order and relations

Classification rule – a data search pattern based on a supported rule type, associated with rule actions

Rule action – an action invoked when the rule has been matched

The classification process discovers sensitive data described by classification policies within data sources and provides output for:

  • populating groups of sensitive objects used in monitoring policies
  • monitoring policy modification
  • event notification (policy violation, remote system notification)
  • sensitive data reporting

Classification process flow


Analysis flow

 

  1. The Guardium appliance connects to the database (data source) using a JDBC driver
  2. Creates a list of tables, views and synonyms
  3. Gets a sample of data from each object
  4. Tries to match each column against a defined pattern rule
  5. For a matched rule, executes the defined actions
  6. Repeats 4 and 5 for each rule
  7. Closes the connection
  8. Repeats from 1 for each data source
  9. Returns the results
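
A runnable toy version of these steps (using sqlite3 in place of a JDBC data source and a single regex rule; it only mirrors the flow above and says nothing about Guardium's internals):

import re, sqlite3

# Toy data source standing in for a JDBC connection (step 1).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE credit_cards (owner TEXT, pan TEXT)")
con.execute("INSERT INTO credit_cards VALUES ('John Doe', '4111111111111111')")

RULES = {"credit card": re.compile(r"^[0-9]{16}[ ]{0,20}$")}

results = []
tables = [r[0] for r in con.execute("SELECT name FROM sqlite_master WHERE type='table'")]  # step 2
for table in tables:
    cur = con.execute("SELECT * FROM " + table + " LIMIT 100")  # step 3: data sample
    columns = [d[0] for d in cur.description]
    for row in cur:
        for col, value in zip(columns, row):
            for rule_name, pattern in RULES.items():  # steps 4-6: try each rule
                if isinstance(value, str) and pattern.match(value):
                    results.append((table, col, rule_name))  # rule action: report a hit
con.close()  # step 7 (step 8 would repeat this per data source)
print(sorted(set(results)))  # step 9: [('credit_cards', 'pan', 'credit card')]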

Classification process setup flows

Guardium 10 provides two scenarios for constructing a classification process:

  • from scratch – each element is created separately, and broader elements can invoke more specialized tasks. Useful for people with good Guardium skills; it allows configuring all existing classification features (Discover->Classification->Classification Policy Builder, Discover->Classification->Classification Process Builder)
  • end-to-end – a streamlined process that makes the creation and automation of a classification process easier. Some features are not available but can be edited later using the first scenario (Discover->Classification->Classification Sensitive Data)

Classification menu

 Simple Classification Process – from scratch

Task description:

Find all table and column names where credit card numbers are stored inside an MS-SQL engine.

My database Glottery contains the table Credit_Cards in the glo schema, with credit card information stored inside


Table with sensitive data

Process creation:

Go to Classification Process Finder (Discover->Classifications->Classification Process Builder) and add a new process (+ icon)


Add new process

Insert the process name in the Process Description field and push the Modify button


Process definition

It opens the Classification Policy Finder pop-up window. Add a new policy using the + icon


Policy selection

In the Classification Policy Definition view, insert the policy Name, Category and Classification type, and save your policy using the Apply button


Policy description

It activates the Edit Rules button – select it


Policy description

In the Classification Policy Rules view, select the Add Rule button


Rule list

In the rule view, insert its name and select Search for Data from the Rule Type list


Rule definition

It will refresh the view; then put the following pattern in the Search Expression field:

^[0-9]{16}[ ]{0,20}$

which is a simple representation of a credit card number (16 digits, trailed by at most 20 spaces). Then save the rule using the Apply button


Rule definition

We return to the rule list with the newly created rule. Close the pop-up window. The newly created policy is not refreshed in the process view, so we need to reopen the process creation window. Select Discover->Classifications->Classification Process Builder again, put in the name, select our policy – Find CC in Tables – and press the Add Datasource button

1

Policy definition

Another pop-up window – Datasource Finder – displays the list of existing database definitions. Use the + icon to add a new one

1

Data source list

Insert the Name, select the appropriate engine from Database Type, and put in the database account credentials and the IP address and port on which the database operates. Save the definition using the Apply button and return to the data source list – Back


Data source definition

Now the newly created data source is on the list. Select it and Add it to the process definition


Data source list

Now the classification process contains a policy and a data source. We can save it – Apply button


Classification process

It activates the Run Once Now button – manual process execution. Run it


Classification process

We can wait for a while or review the status of the process execution. Go to Discover->Classifications->Guardium Job Queue. Our job will be at the top of the list


Job list

Refresh the report and wait for completion. Then return to the classification process list, select the Find CC process and push the View Results button


Process list

The pop-up window will contain the classification process execution results


Classification process results

Finally, our process discovered all tables containing strings that matched the simple regular expression. Notice the glottery.glo.passwords table in the results, which probably has nothing to do with credit card data. The continuation of this article identifies various techniques for eliminating false positives.
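
One such technique (my own sketch – not necessarily what the next part describes): every real payment card number satisfies the Luhn checksum, so regex hits that fail it, such as random 16-digit "passwords", can be discarded:

def luhn_valid(number):
    # Standard Luhn checksum used by payment card numbers.
    digits = [int(d) for d in number.strip()]
    digits.reverse()
    total = sum(digits[0::2])
    for d in digits[1::2]:
        total += d * 2 - 9 if d * 2 > 9 else d * 2
    return total % 10 == 0

hits = ["4111111111111111", "1234567890123456"]  # both match ^[0-9]{16}[ ]{0,20}$
print([h for h in hits if luhn_valid(h)])        # keeps only the checksum-valid number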

 

Article continuation:

  • Part 2 – Classification rules
  • Part 3 – Action rules (soon)
  • Part 4 – Classification process and data sources (tbd)
  • Part 5 – End to End scenarios and Classification Automation (tbd)

WINSTAP (S-TAP, FS-TAP) installation and configuration – Guardium 10

WINSTAP architecture

Guardium 10 introduced new architecture and functionality into the agent used to monitor data access (databases and files) on Windows platforms (well known as WINSTAP). The most interesting are:

  • Integrated installer for 32- and 64-bit platforms
  • Redesigned TCP and Shared Memory drivers
  • File Activity Monitoring with blocking capability
  • File Discovery – an integrated view of files stored on the managed system
  • File Classification – sensitive data identification

The simplified view of the WINSTAP architecture


WINSTAP architecture

shows that we have many different elements, each responsible for a different data monitoring aspect:

  • GIM (Guardium Installation Manager) – a Perl-based service responsible for the installation, update and configuration of all other elements working on the monitored system (separate article here)
  • S-TAP service – communication with the collector and data proxy for the sniffer drivers (WFP, NPM) – DAM functionality
  • WFP – new sniffer driver for the TCP/IP stack
  • NPM – new sniffer driver for shared memory
  • CAS (Change Audit System) – a Java-based service responsible for identifying changes in the critical elements of the database and operating system
  • FS-TAP (or STAPat) – a service responsible for communication with the collector and data proxy for the I/O sniffer (FSMonitor) driver – FAM functionality
  • FSMonitor – an I/O sniffer driver responsible for auditing and blocking access to file operations
  • FAM – a feed service to the collector from the ICM (IBM Content Classification) infrastructure
  • file crawler – an ICM process responsible for scanning the file system and generating file metadata
  • analysis engine – a rule-based classification tool for files
  • ICM server – an ICM process responsible for classification task management and the configuration upload interface for ICM Workbench
  • ICM workbench – a Windows application used to create your own classification rules (decision plans)

This article focuses on two functionalities – database and file activity monitoring. The CAS and FAM (ICM) functions will be described in separate articles.

GIM packages import

The GIM packages are located in the Guardium_10.0_GIM_WIndows.zip archive, available on the IBM Fix Central page – the same place where we can find the GIM installer.

New: In G10 the CAS module is separated from WINSTAP and has to be installed separately. It is a separate archive.

Starting from version 10 we have 3 GIM modules:

  • STAP for Database and File Activity Monitoring (GIM-Kit-Windows archive)
  • FAM ICM analysis and classification tools (GIM-Kit-FAM archive)
  • CAS for Windows (CAS archive)

Extract the GIM modules and import them on the GIM manager appliance (Manage->Module Installation->Upload Modules). Use the Browse button to select the files and upload them:


Module upload

Then import the uploaded modules – click the small “Import this module” icon and confirm the operation. After a while you will be notified that the module has been imported.

Note: In this article I assume that GIM is installed on the monitored system – GIM installation is described here.

Now we are able to configure the modules (Manage->Module Installation->Setup by Client) on the managed system


GIM agents list

To see all available modules for the managed Windows system, you need to uncheck the “Display Only Bundles” flag


Modules list

Now we are ready to install.

S-TAP and FS-TAP installation and configuration

WINSTAP installation

The module configuration screen has not changed in G10. The “Common Module Parameters” section contains the preselected parameters (the ones assumed to be most widely used). In comparison to G9 we can notice 4 new fields for the Query/Rewrite feature (firewall parameters are still unavailable).
I prefer fewer options in this section over putting them all there, which is what we see in the Linux S-TAP configuration.

The “Common Module Parameters” section is used to simplify module configuration. The “Apply to Selected” button saves data from this form to the marked systems inside the “Client Modules Parameters” section. It is useful when you configure two or more managed systems together.


WINSTAP module configuration

Minimum information required to install the WINSTAP module:

  • WINSTAP_INSTALL_DIR – installation directory of this module (e.g. C:/Guardium/WINSTAP)
  • WINSTAP_SQLGUARD_IP – the IP of the collector assigned to this WINSTAP as primary
  • WINSTAP_TAP_IP – only if your managed system has multiple network interfaces (this option has to be set individually for a particular agent)

Please note that most parameters have default values and you do not need to set them.

Now the parameters from “Client Module Parameters” should be assigned to the monitored system – the Apply to Clients button. Finally, the installation process can be invoked using Install/Update (define when the process will start, or order immediate execution – insert “Now”)


Module installation setup

Check the installation status using the “i” icon

Installation status

Status “INSTALLED” confirms the successful installation of WINSTAP

WHAT IF I NEED TO SET UP MORE ADVANCED FEATURES

They are available via the WINSTAP_CMD_LINE parameter. You can put here any values in the format <parameter>=<value> which correspond to the TAP section of guard_tap.ini. Below, an example of installation with 3 additional parameters


Parameters in WINSTAP_CMD_LINE

and guard_tap.ini content after installation


guard_tap.ini

New: WINSTAP 10 changed the location of guard_tap.ini from c:\Windows\System to <WINSTAP_INSTALL_DIR>\Bin

REMOTE WINSTAP RECONFIGURATION

The standard S-TAP modification form is available under Manage->Activity Monitoring->S-TAP Control and provides limited manageability


STAP configuration

but the Guardium API delivers an interface to manage most existing WINSTAP parameters

grdapi update_stap_config stapHost=<stap_ip> updateValue=SECTION.PARAMETER:VALUE waitForResponse=<0|1>

The updateValue parameter can carry many WINSTAP configuration changes at once

updateValue=SECTION.PARAMETER1:VALUE&SECTION.PARAMETER2:VALUE

This method works with 3 sections of guard_tap.ini

  1. TAP
  2. DB_<inspection_engine_number>
  3. SQLGUARD_<collector_ip>

And here is an example that sets the same three parameters that I used in the WINSTAP_CMD_LINE method

grdapi update_stap_config stapHost=192.168.0.20 updateValue=TAP.FIREWALL_INSTALLED:1&TAP.FIREWALL_DEFAUL_STATE:1&TAP.KRB_MSSQL_DRIVER_INSTALLED:1 waitForResponse=1

Do not forget to restart the S-TAP after the change

grdapi restart_stap stapHost=<stap_ip>
INSPECTION ENGINES

The default installation enables database instance discovery. The current version of S-TAP discovers instances of DB2, CouchDB, Informix, MongoDB, MSSQL and Oracle installed on the monitored system. If you would like to monitor other supported databases, you need to add the inspection engine manually (edit the S-TAP configuration in the portal and define it with “Add Inspection Engine”). Then push the Add and Apply buttons


Inspection engine definition

It is possible to disable instance discovery during the WINSTAP installation process. The -NOAUTODISCOVERY flag has to be set in the CMD_COMMAND_LINE parameter.

New in G10: Database Instance Discovery no longer uses Java

Instance discovery can be ordered manually from the portal. In the S-TAP Control view, click the “Send Command” icon


S-TAP Control

then select “Run Database Instance Discovery” command


Send Command window

Be aware that the “Replace Inspection Engines” flag clears all existing IE definitions. Use it if you are running the initial instance scan or intentionally want to replace them. The results of instance discovery are stored in the “Discovered Instances” report


Discovered instances report

To compare the discovered instances with those actually defined in the S-TAP, you can use a grdapi call from the report. In the report bar, expand the Action menu and select the list_inspection_engines command


API invocation from report

Select one row and insert your S-TAP host IP address

list_inspection_engines call

Now the output from grdapi can be compared with the last scan

grdapi output

New in 10: The Action menu in a report allows invoking Guardium API calls for all results in the related report. A very useful feature.

The instance discovery process can be executed periodically using the DISCOVERY_INTERVAL=<time_in_hours> parameter. This parameter cannot be modified by grdapi, so you should remember to set it during installation or change it manually later.
Based on this refreshed information, we can create an Audit Process to identify changes to the existing instances or detect new ones available on the host.
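
For example, using the WINSTAP_CMD_LINE convention shown earlier, a daily scan could be requested at installation time (the value is illustrative):

DISCOVERY_INTERVAL=24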

Tip: If an S-TAP configuration parameter from the TAP section cannot be changed remotely by the API, or no form field exists for it in GIM, you can always modify it using CMD_COMMAND_LINE.

Do not forget to set up the DAM policy on the collector. The default policy installed on the appliance – “Ignore Data Activity for Unknown Connections” – ignores all traffic.

DAM policy creation and installation are available at:
Policy Builder – Protect->Security Policies->Policy Builder for Data & Applications
Policy Installer – Protect->Security Policies->Policy Installation

New in 10: The redefined S-TAP architecture in G10 allows monitoring database traffic without restarting the machine or the database.


Database activity report

Now you are able to monitor database traffic.

FAM FEATURE

Info: I use the FAM acronym here as a reference to the FS-TAP functionality. The FAM ICM features are not part of this article

File Activity Monitoring is licensed separately. The standard WINSTAP installation activates this feature by default. To prevent its installation, put the “-FAM OFF” flag in CMD_COMMAND_LINE (the guard_tap.ini syntax reference FSM_DRIVER_INSTALLED=0 does not work)

Important: If you do not possess a FAM license, please remember to switch this feature off to avoid compliance issues

The installed FAM is visible in the “S-TAP Control” list (the S-TAP host with a “-FAM” suffix)


FAM in S-TAP Control

Important: The default FAM settings switch off monitoring of the Administrator account. FAM policies can block access to particular files or the whole file system, and to protect against accidental mistakes, file activity monitoring ignores super-users (root, Administrator). You can enable this functionality using a TAP flag in guard_tap.ini – FAM_PROTECT_PRIVILEGED=1. Use it in production only when your policies have been tested; incorrect use can lead to a crash and irreversible damage to the monitored system

FAM does not require any inspection engine definition. File monitoring is defined by a separate FAM policy installed in parallel to the DAM one.

FAM policy builder

The FAM policy builder (Protect->Security Policies->Policy Builder for Files) delivers a new application to create and modify file monitoring policies. Use the + icon to add a new policy


FAM policy builder

Insert the policy name. The “Show Templates” option allows using rules created in other FAM policies. Add a new rule using the + icon


New FAM policy

The rule definition screen uses a new interface logic incorporated in G10 – the “End to End scenario”. In this case we are able to create a rule in 4 steps, with a clear context for the task. Now we need to insert the rule name and go Next


FAM rule – Rule Name

Next we define the systems where the rule will be evaluated. We can select a particular system with the FAM feature enabled


FAM rule – datasource

or select/create a group of systems


FAM rule – datasource group

The next step defines the action type:

  1. Audit (put event to Access audit domain)
  2. Alert and Audit (1 and additional Guardium Alert event)
  3. Log As Violation and Audit (1 and mark event in the Quick Search as a violation)
  4. Block, Log As Violation and Audit (1, 3 and block I/O operation)
  5. Ignore (do nothing)

FAM rule – action

The last step defines the rule criteria. We can use a maximum of 3 of them:

  • File path (required; defines a single path or a group of paths, wildcards allowed)
  • User (optional; one user or a group of users)
  • File operation (optional; a single operation or a set of available operations)

Available qualifiers for File path:

  • = this path
  • != everything except this path
  • In Group – all paths in the group
  • Not In Group – everything except paths in this group

FAM rule – criteria – File Path qualifiers

This is an example of a file path group definition


FAM rule – criteria – file path group definition

The User criterion uses the same four qualifiers, but related to user names. If the User criterion does not appear in the rule or has no value, every user is monitored.

The Access command criterion can refer to one selected operation (=) or a group of them (In Group). If this criterion has been removed from the rule or has no value, all operations are monitored.


FAM rule – criteria – file operations

Tip: If you want to see all file system operations, including directory structure modifications, leave the Access command criterion empty

Two mutually exclusive options are available in the criteria section:

  • Monitor subdirectories in file path – very useful, but consider its influence on performance
  • Removable media – disables the File path criterion in the rule and refers to all files on attached media (pen drive, CD/DVD, etc.)


    FAM rule – Removable media monitoring

Rule evaluation in a FAM policy is similar to DAM. Rules are evaluated from top to bottom. If a rule matches the analyzed file event, all other rules are ignored (you cannot force the evaluation process to continue to the next rule). Use the arrow icons to reorder rules in your policy


FAM policy – rule order

FAM policy installation

The FAM policy has to be installed on the collector. It is completely independent of the DAM policy and must be installed in parallel.

In Protect->Security Policies->Policy Installation, point to your FAM policy in the Policy Installer section. Then select the action


Policy installation

which is executed immediately


DAM and FAM policy installed together

Tip: When FAM and DAM coexist, you need to manage a minimum of 2 policies on your collector. Use policy names that are easy to distinguish (DAM- and FAM- prefixes, for example).

The Install & Override action, most frequently used before G10, is no longer an option in DAM and FAM environments.

Important: A modified policy is not installed automatically on the collector; you need to reinstall it after a change. To avoid policy deinstallation/installation, use the Run Once Now button in the Policy Installer section (installed policy refresh)

FAM reporting

All FAM audited events are stored in the Access domain. Here is an example of a query that provides full information about file access events


Query for FAM

and a report based on it


FAM Report

FAM QuickSearch

QuickSearch for FAM is separate from DAM. You need to enable this option using grdapi:

grdapi enable_fam_crawler activity_schedule_units=<MINUTE|HOUR> activity_schedule_interval=<INTERVAL> entitlement_schedule_units=<MINUTE|HOUR> entitlement_schedule_interval=<INTERVAL>

The activity_* parameters are related to events audited by the policy
The entitlement_* parameters are related to metadata gathered by ICM
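
For example, to scan activity every 10 minutes and entitlements every hour (the values are illustrative):

grdapi enable_fam_crawler activity_schedule_units=MINUTE activity_schedule_interval=10 entitlement_schedule_units=HOUR entitlement_schedule_interval=1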

The FAM and DAM quick search windows can be invoked from the menu bar


QuickSearch type selection

FAM quicksearch

Summary:
Guardium 10 introduced a lot of new features and improvements for monitoring Windows environments:
– simple installation
– wider support for instance discovery
– no reboots and restarts after agent installation
– remote configuration and management
– file activity monitoring and blocking
– file content analysis and classification

It is a significant step toward building an integrated data governance platform

Everything you always wanted to know about DAM but were afraid to ask

#1 – What exactly is DAM?

You can find many DAM definitions and be a little confused by the dozens of different features mentioned there, but some of them are always present and can be considered key requirements (DAM sensu stricto):

  • 100% visibility of the access to data
  • monitoring completely independent of database administrators
  • analysis on the SQL level
  • real-time, correlated incident identification
  • audit of events related to incidents
  • support of forensic analysis

Some other features are not native to DAM, but their popularity means they are now widely recognized as part of DAM (DAM sensu lato):

  • access blocking (this feature is generally part of DAMP – Database Activity Monitoring & Protection, also known as DBF – Database Firewall)
  • database user authorizations reporting
  • sensitive data identification
  • dynamic data masking (on database level)
  • vulnerability management (whatever that means for the requestor 😉 )

We can also identify some non-functional requirements relevant to any security solution:

  • minimal influence on the performance of the monitored system
  • support for heterogeneous database environments
  • support for enterprise scale

It is very difficult to compare solutions. Be sure that you compare “apples” to “apples” instead of “apples” to “pears”. Very often the requested DAM feature works on a different layer and is covered by another solution (WAF, IPS, NG-Firewall, CM management).
Ask for the solution’s support for your cases and requirements rather than for the list of functions included in the vendor box.

#2 – Agent-based or agent-less monitoring?

In the case of DAM, there can be only one answer to this question. 100% data traffic visibility is not possible with a network sniffer (agent-less), because you are not able to monitor local sessions.

How your database is accessed:

  • remotely (TCP, network pipes, encrypted connection)
  • locally (TCP, shared memory, network pipes)

Only an agent residing in the managed environment can see local sessions and non-TCP protocols. It is hard to argue with this obvious statement. However, some remarks are important:

  • an agent installed on the monitored system does affect it – but the question is about the acceptable level of this performance influence, not about the choice between agent-based and agent-less architecture
  • an agent requires updates, reconfiguration, database and system restarts – this can be true for a particular solution, but it is false in the case of Guardium

Only agent-based monitoring ensures coverage of the DAM requirements. Check the supportability of your platforms and protocols. Check the performance overhead on your database.

Even if you are able to disable all local access to the database, you still assume that your network configuration is stable and all sessions are visible to the sniffer, which is not true at all.

#3 – Does your DAM prevent SQL Injection?

I love this stuff. This question is completely unrelated to the SQL level; it is a question about the protection of a web application.
If you would like to stop SQL injection attacks, the solution is easy – use a WAF or IPS/NG Firewall. These types of solutions work on the network layer and are able to de-encapsulate HTTP/S data, parse it and identify dangerous content (an injected SQL string or its meta-form).

It is a textbook example of how one commonly known word in a name leads to misunderstanding the crux of the problem and its resolution.

SQL injection must be analysed on the HTTP/S layer. It is not related to DAM protection.

If your WAF or IPS is not able to block the attack, DAM will still be able to analyse the SQL syntax, session context and data references. That is a normal DAM task and should not be mistaken for SQL injection protection.
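
A minimal illustration of the layer difference (hypothetical code, not tied to any product) – the injection is recognizable in the HTTP parameter, while the database, and DAM, already receive a syntactically valid statement:

# The HTTP parameter as a WAF sees it - the injection is still recognizable here:
user_input = "' OR '1'='1"

# The SQL as the database (and DAM) receives it - a perfectly valid statement:
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(query)
# SELECT * FROM users WHERE name = '' OR '1'='1'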

#4 – Can we build the Virtual Patch protection with DAM?

In many respects the answer is similar to the SQL injection case, but I will describe it in more depth.

VP is a security approach that creates protection outside the vulnerable system. Some examples:

  • the system can be exploited, but a patch does not exist or cannot be installed
  • a vulnerable functionality has to be available to a particular subject only
  • the service has a low reputation and whitelisting of its activity is required

There are many cases where DAM can provide VP protection:

  • blocking access to a vulnerable stored procedure
  • restricting access to defined clients only
  • accepting only a defined list of SQLs and operations on an object

but if the vulnerable element resides in the database, we need to consider that exploitation can uncover another attack vector. That is why VP should primarily be defined on the network layer, using IPS and NG-firewalls.

DAM can act as an auxiliary in building VP. Network in-line protection should be considered first

#5 – What is your DAM data collection architecture?

Some solutions do not work in real time and use DB logs or an additional event collection mechanism to provide SQL visibility. If we do not need blocking, this architecture could be accepted, but such logging depends on the DB administrators and does not provide any segregation of duties (for example, an insider can modify or switch off the logging mechanism).

How the audit data are stored and managed by DAM is another architectural question. Would you like to switch from one audit console to another to check the status of your monitored environment? Would you like to have to remember which DAM box contains the data required for the current analysis? And most importantly, do you know what kind of stored audit data will be key in your forensic searches?
A DAM solution usually monitors heterogeneous environments, covers dozens of databases and gathers terabytes of audit archives over the retention period.
That is why I suggest considering the following:

  • the possibility to manage the whole DAM environment from one console
  • the possibility to aggregate data (de-duplication, distribution of the performance load)
  • central reporting from all DAM boxes
  • cross-reporting based on any parameter of the audit event
  • offline forensics on restored archives

DAM is a key element of your security infrastructure. Make sure that its architectural limitations will not close off possibilities for development and integration

#6 – Why do I not see user names in DAM?

On the SQL session level we see the DB user name only. If you would like to get information about the application user name related to a particular SQL, you need to understand that this relation is created and managed by the application server (queue manager).

Every DAM faces this challenge and provides different solutions, but it always requires deeper analysis and sometimes application modification.

Guardium delivers many different solutions for Application User Translation in the connection pool, described here – “Guardium – App User Translation”.

Application User Translation (AUT) is a correlation process between the application user and his SQLs inside the anonymized connection pool.
Make sure that AUT does not work by simple correlation of timestamps between the application and the database. That kind of mapping in a multi-session channel is not credible and has no legal value.

#7 – I have SIEM, why do I need DAM?

Security Information and Event Management (SIEM) systems are responsible for correlating security events in the IT infrastructure to identify incidents. These tools rely on the monitored systems’ security logs, network activity, recognized vulnerabilities and reputation lists.

SIEM manages the security events delivered to it in a predefined schema; it is not able to understand the HTTP requests of your application, the SQL logic of your database transactions, the commands executed by your administrator, and so on. It expects that the monitored system will prepare a standardized output containing the relevant information, which can be normalized and analyzed by the incident identification rules inside the SIEM correlation engine.

Only DAM has the ability to analyze each SQL and identify access to sensitive data, monitor privileged activity, correlate access to tables, and predict the effect of DML/DDL/DCL actions.

In most cases SIEM licensing is based on the EPS (Events per Second) metric. Even if the SIEM contained the DAM intelligence and we wanted to analyze all SQLs inside it, the cost of such a solution would be astronomical.

DAM delivers to the SIEM analyzed security events in a constant data format, which enables their correlation with other monitored sources
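
For instance, a DAM-analyzed event forwarded to a SIEM could look like this (a hypothetical record – the field names are mine, not a Guardium format):

import json

event = {
    "timestamp": "2017-05-22T10:40:00Z",
    "db_user": "APPPOOL1",
    "app_user": "jsmith",          # resolved by Application User Translation
    "client_ip": "192.168.0.20",
    "verb": "SELECT",
    "objects": ["glo.Credit_Cards"],
    "policy_rule": "Sensitive data access",
    "severity": "HIGH",
}
print(json.dumps(event))  # one flat, constant schema the SIEM can correlate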

#8 – Does your DBF work on the session or SQL level?

DAM blocking capability is often requested, but it should be considered very carefully. Most application traffic to a database consists of transactional statements, where the set of SQLs and their order affects the processing and its result. If we block one of the calls in this sequence, we can get an exception or, worse, a loss of data consistency.

The business security priorities – confidentiality, integrity and availability (CIA) – lead to the one possible conclusion that only a session reset is a safe method of blocking access, because it avoids the execution of incomplete transactions.
However, this method is useless in a connection pool – resetting the SQL session kills transactions belonging to different application sessions.
That is why blocking was actively used only for non-application access to the database, while application access was monitored with whitelisting.

Guardium 10, with the Query/Rewrite feature, redefined this approach. Now we can analyze SQL and replace it – not in order to change the transaction’s body, but to signal suspicious activity and cancel the execution.

from:

BEGIN TRANSACTION
...
END TRANSACTION

to:

BEGIN TRANSACTION
...
(suspicious SQL) -> (redacted to set @PARAMETER)
...
(@PARAMETER validation to cancel execution)
END TRANSACTION

It requires small changes in the application, but provides “blocking” at the transaction level.

In most cases, only a connection reset is an acceptable form of blocking. For application traffic, use Query/Rewrite

PICTURES BY ULABUKA