Public Key Authentication with SSH – PuTTY

Guardium 10.5 allows authentication on cli accounts with public keys.

The configuration is simple but not well described in the standard documentation at the moment, so I have decided to publish this short post.

I present here the most popular case, where SSH access is based on PuTTY.

Step 1 – PuTTY configuration

I suggest using puttygen.exe to create the SSH keys (it can be downloaded as a supporting tool from the PuTTY home page – link).

I push the Generate button to create the keys. The default settings select the RSA algorithm and a 2048-bit key length. For production purposes I suggest using longer keys.


Now we need to save the private key in a place available to PuTTY. I strongly suggest setting a key passphrase to secure the private key; it will have to be entered for any session opened with the keys generated here.

Then we also need to save the public key somewhere.

Step 2 – Collector setup

These actions have to be repeated on each appliance that will support authentication using public keys.

The appliance configuration requires setting up the appliance keys and importing the public keys of all Guardium administrators who are allowed to log in on the cli account. Of course the crux of the public key infrastructure is key uniqueness per administrator, which allows us to control access even on shared accounts.

Action 1 – Appliance keys generation

From cli (still logged in using a password) execute the command:

show system public key cli

The output will inform you that there are no keys on the appliance and that they will be generated.

The message will also display the just-generated public key. For the PuTTY configuration we do not need to copy it.

It is also possible to delete the existing keys using the command:

store system public key reset

Removing the appliance keys will stop public-key access to the system for all registered users. To restore the configuration after the appliance keys have been deleted, we need to execute again:

show system public key cli

Action 2 – Client public key import

User public keys can be imported with the command:

store system public key authorized

The command expects the client public key in OpenSSH format, inserted on one line:

ssh-rsa <key> <comment label>

but the public key exported in Step 1 was stored by puttygen in the SSH2 (RFC 4716) format and has to be reformatted into the one supported by Guardium.
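For illustration, a public key saved by puttygen in the SSH2 format looks roughly like this (the key body is shortened here):

---- BEGIN SSH2 PUBLIC KEY ----
Comment: "rsa-key-20180504"
AAAAB3NzaC1yc2EAAAABJQAAAQEAnF3...
---- END SSH2 PUBLIC KEY ----

and the same key in the one-line OpenSSH format expected by the appliance:

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAnF3... rsa-key-20180504

The reformatting can be done by hand (join the base64 lines, prefix them with ssh-rsa and append the comment) or, if an OpenSSH client is at hand, with:

ssh-keygen -i -f exported_key.pub

Note that puttygen itself also displays the ready one-line form in the “Public key for pasting into OpenSSH authorized_keys file” box, so it can simply be copied from there.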

So in this case the registration of my PuTTY client on the appliance is just the store system public key authorized command with the reformatted key line pasted in.


We can review the list of registered clients using the command:

show system public key authorized

To remove a particular client's access we can use the command:

delete system public key authorized

Step 3 – PuTTY session configuration

Now we can configure PuTTY to use the generated keys. I have created a new session (MySSHPKI) to log in on the appliance as the cli user.


Then I set the location of my private key in the Connection/SSH/Auth configuration view and saved the session settings.


Step 4 – Connection test

The SSH connection asked me for my private key passphrase, and I was finally able to log in to the appliance without the Guardium password.


I suggest this kind of configuration for all production systems. It allows controlling access to the system and quickly revoking access to the Guardium infrastructure by removing a public key from the list accepted on the appliance.

The configuration still has to be managed on each appliance separately, and there is no internal audit trail of the key used during cli connections, but I believe these improvements will be implemented soon.


KTAP installation on Linux – video

This video covers most KTAP installation challenges on the Linux platform.

Chapters timeline:

  1. Introduction 0’00”
  2. STAP installation in new GIM “Setup by Client” application 0’30”
  3. KTAP initialization problems identification 3’08”
  4. Local KTAP compilation 5’58”
  5. Installation in Combo mode 8’54”
  6. KTAP installation flow 11’01”
  7. TEST & PROD scenario – STAP installation 11’58”
  8. TEST & PROD scenario – STAP upgrade 15’15”
  9. TEST & PROD scenario – Linux kernel upgrade 16’55”

Link: https://www.youtube.com/watch?v=77QQT7Rjlc0

Summary:

KTAP initialization on Linux is a challenging task for Guardium beginners.
The new portal update in GIM 10.1.4 makes these tasks clearer and simpler.

The video presents the most common situations related to KTAP in STAP life-cycle management on the Linux platform.

Remarks:

Please notice that custom modules can be installed on machines where the GIM client flag GIM_ALLOW_CUSTOMED_BUNDLES is set to 1.

So in all scenarios where 8XX modules are applied, please assume this setting.


Guardium Reports Platform understanding (2)

Full SQL and SQL monitoring – a deeper view of the Access domain

To better understand the SQL entity we need to describe the logging actions in a Guardium policy a little more deeply.
My audit policy (selective audit trail) contains two rules.
It will log the activity of syntaxuser1 using the LOG action (LOG ONLY), and other traffic will be audited with details – the LOG FULL DETAILS action.
I connected to the PostgreSQL database twice, as test and syntaxuser1,
and these sessions are visible (right), but the report based on the Full SQL entity does not contain the syntaxuser1 activity (left).
The reason is simple, and understanding it is very important for creating accurate database monitoring policies and reports. The LOG ONLY action logs SQL constructs and does not audit the full SQL body executed inside the session.

So what exactly does LOG ONLY log?!

LOG ONLY (which is also the default action for non-selective audit trail policies!) removes parameter values from the SQL body. For instance, these 3 SQLs:

SELECT * FROM table WHERE columnX='value1'
SELECT * FROM table WHERE columnX='value2'
SELECT * FROM table WHERE columnX='value3'

are described as one SQL construct

SELECT * FROM table WHERE columnX='?'

so activity audited with the LOG ONLY action allows identifying the syntax, but it is not possible to present the full body (if the SQL contains parameters).
The main purpose of using the LOG ONLY action is a meaningful decrease in the disk space consumed by audited traffic, because we do not need to store each SQL – we only put a reference (Construct ID) to the SQL constructs already known by the collector and stored in the SQL entity.
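To visualize the idea, here is a toy Python sketch of how stripping parameter values collapses many statements into one construct – it only illustrates the concept, it is not Guardium's real parser:

import re

def to_construct(sql):
    # replace quoted string literals and bare numbers with the '?' placeholder,
    # the way LOG ONLY "anonymizes" statements into constructs
    sql = re.sub(r"'[^']*'", "'?'", sql)
    return re.sub(r"\b\d+\b", "?", sql)

statements = [
    "SELECT * FROM table WHERE columnX='value1'",
    "SELECT * FROM table WHERE columnX='value2'",
    "SELECT * FROM table WHERE columnX='value3'",
]
# all three statements map to one construct, so the SQL entity stores
# a single record (one Construct ID) instead of three full bodies
print({to_construct(s) for s in statements})
# {"SELECT * FROM table WHERE columnX='?'"}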

It is a good time to introduce a very important entity – Access Period. Independently of the FULL SQL flow (described in part 1), Guardium stores audited activity inside hourly sets named periods – we can visualize them as data partitions. We can debate the sense of this approach today, but it was a historic decision (more than 10 years ago) based, among other things, on the cost of storage and CPU utilization.

Periods describe all audited traffic on an hourly basis, simplify data partitioning, and point to the SQLs executed in a given timeframe by the Instance ID and Construct ID keys. So we can present the data flow in the 5 main entities this way:

  • the policy makes the decision to log activity (LOG ONLY or LOG FULL DETAILS)
  • if the SQL is related to a new session – a new Session ID is registered and an Access ID is attached to it, or a new connection profile is registered first
  • the system checks – is the Session ID registered in the current period (current hour)?
    • False – a new Instance ID is created in the Access Period entity
  • if the LOG ONLY action is used:
    • the SQL is “anonymized” – parameter values are replaced by question marks
    • the system checks the existence of the SQL construct in the SQL entity
      • False – a new Construct ID is registered in the SQL entity
    • a new record (access) is attached to the current period (partition) in the Access Period entity – Access ID, relations to SQL (Construct ID) and Session (Session ID)
  • if the LOG FULL DETAILS action is used:
    • the SQL is registered in FULL SQL with a reference to the Session ID
    • the SQL is “anonymized” and registered in Access Period (Instance ID) the same way as described for the LOG ONLY action
    • the record in FULL SQL stores a reference to the Instance ID in Access Period
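The flow above can also be summarized as a short Python sketch. The structures and names below are invented for illustration – this is not how the collector is really implemented:

import re

def to_construct(sql):
    return re.sub(r"'[^']*'", "'?'", sql)  # "anonymization", as described above

sessions = {}       # Session ID -> Access ID (connection profile)
sql_entity = {}     # construct body -> Construct ID
access_period = {}  # (hour, Session ID, Construct ID) -> execution count (instance)
full_sql = []       # full statement bodies, kept only for LOG FULL DETAILS

def log(session_id, access_id, hour, sql, action):
    sessions.setdefault(session_id, access_id)         # register a new session once
    construct = to_construct(sql)                      # "anonymize" the SQL
    cid = sql_entity.setdefault(construct, len(sql_entity) + 1)
    instance = (hour, session_id, cid)                 # instance in the current period
    access_period[instance] = access_period.get(instance, 0) + 1
    if action == "LOG FULL DETAILS":
        full_sql.append((session_id, instance, sql))   # full body + Instance ID reference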

We should be aware of some limitations if the LOG ONLY action is used to audit a session:

  • data are stored without parameter values
  • we cannot identify the exact time of SQL execution; we can only estimate it by reference to:
    • Session timestamps – between Session Start and Session End
    • Access Period – SQL execution inside the partition, between Period Start and Period End
    • Access Period Timestamp – the last execution of a particular SQL construct inside the period instance
  • SQLs from one session can be located in many periods (partitions) if the session spans many hours
  • both logging actions can be used inside one policy to audit activity from the same session (it is powerful), but this can lead to incorrect conclusions if we rely on a FULL SQL report only
  • Period Start is used as the timestamp for the Access Period entity (the Timestamp field carries no important value)

It should also be stressed that the LOG FULL DETAILS action stores SQL in both entities: Full SQL and SQL.

So, from theory to practice 🙂

Example 1 – No space on disk, no data in reports

It happens when we use the LOG ONLY action in our policies and then try to review the activity in a report based on the FULL SQL entity.
The SQL counter identifies thousands of constructs (a report based on the SQL entity), but the SQL syntax report is empty (based on FULL SQL) – my audit policy uses the LOG ONLY action. If you log events with the LOG ONLY action, you should not use reports based on the FULL SQL entity to review them.

Example 2 – Where is the timestamp for the SQL entity?

I created a query based on the main entity SQL,
and you should notice that there is no timestamp among the fields available in the SQL entity. What does it mean? Can we create a report based on it and use a time period specification for the results?
Yes, we can, because a query gathers the timestamp from the closest (direct) relation if it does not exist in the main entity.
The direct relation for the SQL entity is Access Period, which uses the Period Start field as the timestamp. This has a big influence on the results 🙂

I connected to the database twice and executed a simple query in each session,
then I tried to display my activity related to the second session only (based on a Start Date after 19:18:00),
and both sessions are visible. The explanation is simple – the timestamp for this report is based on the Period Start field, so the input value 19:18 selects the period 19:00-20:00 (7-8 pm), in which both test user sessions happened.
There is no way to make the time more granular here (more in Example 5) because it depends on the main entity. How to display the SQLs belonging only to the second session? We can use the Session ID as a filter, for example.
I put the Session ID of the session of interest into the added filter and “voilà”.

Example 3 – I do not see part of my SQL’s – situation 1

This time I executed 6 SQLs, but only two of them are displayed in my report based on the SQL entity.
As I pointed out before, the SQL entity stores constructs instead of the full SQL body, so SELECT 1, SELECT 2 and SELECT 3 are visible here as SELECT ?, and the 3 executions of SELECT now() also point to only one SQL construct. Please notice also that the Timestamp in the Access Period entity points to the time of the last execution of a particular SQL construct.
Does it mean that we cannot identify the exact number of executed SQLs if the LOG ONLY action is used? Of course we can 🙂 The number of occurrences of each construct is stored in Access Period, and we can refer to it from the SQL entity using the entity counter (Add Count).
Now we have the full information: my session contained two SQL constructs and each of them was executed three times.
The SQL entity does not store an execution timestamp, so the order of the executed constructs is unknown.

Example 4 – I do not see part of my SQL’s – situation 2

This time I executed 8 SQLs inside one session,
and only four appear in the report. You should notice that my session lifetime covers 2 periods (08:00-09:00 and 09:00-10:00), but the report time range refers only to the second one. In the Session Start column we have the information when the session started, and my period reference has to point to it if I want to receive the full session statistics.

Example 5 – One hour granularity is not foxy

Guardium allows decreasing the default access period time granularity from one hour down to even 1 minute.
Do not forget to Apply the changes and Restart IE’s (the inspection engines) first. Here the Logging Granularity has been set to 10 minutes, which is visible in the report below.

Example 6 – SQL or Access Period as main entity?

The SQL entity is used as the main entity only when information about the construct body is needed. If we focus on quantitative analysis and are interested in user behavior, the Access Period entity is much more efficient. Access Period is also the timestamp reference entity for very useful entities like Command and Object (I will focus on them in the next article about Guardium reporting).

Example 7 – How to see all SQL’s

It is a common situation that a Guardium policy mixes the LOG ONLY and LOG FULL DETAILS actions inside its rules. Then only part of the activity can be reported using the FULL SQL entity. We do not need to create complicated reports to summarize and analyze this diversified type of auditing, because each fully monitored SQL is also stored inside the SQL entity.
It should now be clear that FULL SQL refers to Access Period using the Instance ID key, and indirectly we can identify the executed Construct ID.

Please remember that any audited activity in Guardium is always visible inside the Access Period and SQL entities.
Any quantitative analysis should rely on them, especially when not only the LOG FULL DETAILS action is used inside the policy rules.
Report data extraction works much more efficiently if we use Access Period instead of heavy queries on the FULL SQL entity.


Guardium Reports Platform understanding (1)

Part I – timestamps, main entity and entity relations in the Access domain

I receive many questions about correct report definition, tied to misunderstanding of the data relations in the audit database and of main entity selection.

The simplest answer is: the main entity defines the relations between all entities inside a Guardium query, but I think that is still not clear for most readers 🙂

A Guardium query presents values from a reporting domain. Please remember that a query can refer to only one domain (for instance Access or Exception). If a report should present data from more domains, Guardium allows that using a custom domain (not important here).

Inside a domain, the audited data are stored in fields, which are grouped inside entities.

For example, the Access domain contains 19 entities, and each of them can contain dozens of fields. Simplifying, we can imagine this structure as fields which are columns in tables (entities) which are part of a tablespace (domain), where a query can refer to one tablespace only. This comparison is quite accurate because we have strict relations between entities: 1-1, 1-N or N-1.
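To picture the analogy, here is the hierarchy as nested Python structures; the field subsets are just a few examples, not the complete lists:

# domain ~ tablespace, entity ~ table, field ~ column
access_domain = {
    "Client/Server": ["Client IP", "Server IP", "DB User Name", "Timestamp"],
    "Session":       ["Access ID", "Session Start", "Session End", "Timestamp"],
    "FULL SQL":      ["Session ID", "Timestamp", "Full Sql"],
}
# a query addresses exactly one domain and joins its entities
# along their fixed 1-1, 1-N or N-1 relations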

Example 1 – Timestamps and 3 main entity relations

Please assume that in all examples here the Guardium policy audits SQL activity using the “LOG FULL DETAILS” action.

I created a new query in the Access domain with the main entity set to FULL SQL.

Then I connected 3 times to the postgres database and executed a simple “select now()” command.
Here is the report presenting this activity based on my query (filtered by user name). I put four timestamps in my report – from the Client/Server (Access), Session (Timestamp and Session Start) and FULL SQL entities. You can notice that the Timestamp from the Client/Server entity has the same value in every row, not strictly related to the execution time of my SQLs. What exactly does this value point to?
To understand this, we need to treat the Client/Server entity as a dictionary (referential data) of tuples which include information about Client IP, Server IP and DB user name, where the Timestamp points to the date when the particular tuple was registered (appeared for the first time) on the appliance. So, if I am focusing on SQL activity, this timestamp has no value for me, because it does not carry any information about connection time or SQL execution. The “foreign key” matching other entities with Client/Server is named Access Id.
The report based on the Client/Server entity (GN – Example 3) exemplifies the connection with the Session entity.

The same relation exists between the Session and FULL SQL entities, based on the “foreign key” Session ID.

Finally, we can present the relations inside the 3 main entities this way: if a new connection (session) is started, the information about it is registered as a new record in the Session entity. The access-related information (IP addresses, user name, port, etc.) is referenced from Client/Server (Access ID), and each new SQL from the session stream is stored in FULL SQL with a reference to the Session by Session ID.
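As a toy picture of these keys (all values here are invented for illustration):

# one Client/Server tuple, referenced by sessions via Access ID;
# each SQL references its session via Session ID
client_server = {101: {"Client IP": "10.0.0.5", "DB User Name": "TESTUSER",
                       "Timestamp": "first appearance of this tuple"}}
session = {5001: {"Access ID": 101,
                  "Session Start": "2017-10-05 16:38:55",
                  "Session End": "2017-10-05 16:39:02"}}
full_sql = [{"Session ID": 5001,
             "Timestamp": "2017-10-05 16:38:57",  # exact execution time
             "SQL": "select now()"}]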

Now it should be clear that the Timestamp from Full SQL is related to the exact time when the SQL was processed by the database (my NTP configuration works well 😉 ).
What about the Timestamp in the Session entity? Hmmm, even the Guardium documentation suggests not focusing on it – “When tracking Session information, you will probably be more interested in the Session Start and Session End attributes that Timestamp attribute“.
I agree with that and suggest using Session Start and Session End, which carry the information about the connection opening and closure respectively.

Timestamps summary:

  • Timestamp from the Client/Server entity – not related to the session or SQL; refers to the first appearance of the access description on the Guardium appliance
  • Timestamp from the Session entity – changes during the session lifetime; of limited value for defining the exact time of particular activity in the session
  • Session Start from the Session entity – points to when the session started
  • Session End from the Session entity – points to when/if the session was closed
  • Timestamp from the FULL SQL entity – points to when the SQL was executed

Example 2 – main entity selection

Now I have created a query with the main entity Client/Server. It means that the report will show data from the Client/Server entity, and any fields from other entities will work through the appropriate relation. You know that the relations Client/Server->Session and Client/Server->FULL SQL (indirect) are 1:N, so we cannot present their values directly, and a counter of events is suggested for the Session Id and FULL SQL fields. Looks good, but we will face two problems at once 🙂
This report suggests that the shown Access ID tuple has been referenced in 4 sessions and that only two SQLs are related to them – strange; how is it possible? Technically it is possible, but not here.
The fact is that this tuple is related to 2 sessions and four SQLs – the opposite of the values shown in my report!?

We received this output because my report counted values from two external entities (Session and FULL SQL), and there is no direct relation between Client/Server and FULL SQL. The indirect relation was counted first with a DISTINCT clause (value 2), and then the sessions were summarized without DISTINCT (value 4).
This kind of problem is a common situation if we do not understand the relations between entities.
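The effect is the classic join fan-out. A tiny Python sketch with invented data shows where the 4 comes from:

# one Access ID tuple ("A1"), 2 sessions, 2 full SQLs per session
session_to_access = {"S1": "A1", "S2": "A1"}
full_sql = [("S1", "sql1"), ("S1", "sql2"), ("S2", "sql3"), ("S2", "sql4")]

# joining Access -> Session -> FULL SQL yields one row per SQL: 4 rows
joined = [(session_to_access[sid], sid, q) for sid, q in full_sql]

print(len({sid for _, sid, _ in joined}))  # 2 – sessions counted with DISTINCT (the true number)
print(len(joined))                         # 4 – sessions counted without DISTINCT (what the report showed)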
I modified my query, removed the FULL SQL counter, and now the report shows the correct number of sessions.

Now we can switch to the second challenge. Most of you have probably wondered why most reports have a time-based data filter.
And now everything should be clear :). The time selection is based on the main entity's Timestamp and appears if the entity contains one.
I noted before that the Timestamp in the Client/Server entity points to when the Access tuple was registered in the audit database on the appliance (not relevant in 99% of situations). So, if I am looking for the number of sessions in the last hour, the result set in my report will be empty, because the tuple was created much earlier.
It is the effect of selecting the wrong main entity for our purpose. So, if we would like to display session (connection) related information, our query should rely on the Session entity.
However, as was also mentioned, the Timestamp in the Session entity is not valuable – it changes during the session lifetime and does not provide a well-defined point of time. That is why Guardium provides two virtual main entities in the Access domain, corresponding to Session Start and Session End inside the Session entity. Now I can create a report which counts sessions in a defined time range based on the Session Start timestamp.
My query does not contain any field from the Session entity because my goal is to count sessions, so there is no sense in putting session details inside. I added a sessions counter using the Add Count flag, which indirectly adds a fourth field (Count of Sessions).
You can see that my report based on the Session Start entity lists all sessions started in the last hour (left), and the report based on the Client/Server entity is empty, because the referential data (Access ID) were stored earlier, when the particular connection information was identified for the first time (right).

My examples can lead to the opinion that queries based on the Client/Server entity are not valuable at all. They definitely are usable, but for well-defined cases. For instance, we may want to identify new connection profiles on a database system – new tuples which have never connected to our system before. Based on this information we can identify anomalies in the access to a protected resource – new database clients never seen before.

The Client/Server, Session and Full SQL entities are the base of most reports focused on detailed SQL activity. I hope that this article lets you create the needed queries faster and deliver the expected results.


In the next article about reporting I will explain the difference between the FULL SQL and SQL entities.

DAM in GDPR context

The buzz around GDPR leads to a situation where customers receive messages that every existing security solution has “something” for it :). It is a good sales strategy, but definitely a painful tactic for Security Officers with a limited budget and a hard nut to crack before 25 May 2018.

Here I would like to review the GDPR requirements (AS-IS, because the European Data Protection Board has still not provided certification guidelines) from the DAM perspective and go over the most popular questions tied to DAM in the GDPR context.

Where does DAM cover GDPR requirements?

  • Article 5.1(f) – Data protection principles assume protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures.
    DAM is a dedicated solution for monitoring the SQL stream – granular policies can narrow access to accepted vectors only, behavioral analysis identifies anomalies, prevention rules block suspicious activity, and SQL analysis dynamically masks data and even stops the execution of dangerous commands.
    Administrative fines and possible civil actions should change the PI administrators' approach; manual organizational measures should be considered insufficient.
  • Article 5.2 – Demonstrating data protection based on manual processes is neither efficient nor sufficient.
    Only solutions that are proactive or react automatically to correlated events can cover the GDPR requirements. In addition to SQL logging, DAM provides information about the activity context (who, when, what), strong reporting capabilities to review an analyzed incident quickly, policies which identify PI processing, quantitative analysis that simplifies spotting abnormal behavior, and a self-learning engine discovering anomalies in the standard access to the monitored system.
    DAM blocking capabilities are unique in providing full control over privileged accounts and implementing access control that covers the segregation of duties demand.
  • Article 9 – Processing of special categories of personal data
    Sensitive personal data (racial or ethnic origin, political opinions, religious beliefs, genetic and biometric data, health condition or sexual orientation) included in a PI administrator's databases change the strength of the GDPR requirements in 2 places:

    • Article 30.5 – even if a company employs fewer than 250 workers, PI processing has to be recorded
    • Article 83.5(a) – administrative fines related to lack of compliance on a silo with sensitive information are doubled

The DAM data classification engine can identify sensitive information with a minimal number of false positives, based on catalog, regular expression, dictionary or custom searches. The results allow focusing on the most critical assets from the GDPR perspective.
Classification and database discovery processes executed on a schedule rapidly identify changes inside database schemas and network assets.
Awareness of where sensitive data are located is crucial to confirm the efficiency of the processes for data pseudonymization and minimization.
Monitoring a whole data lake can be implemented only in the largest corporations; knowing what should be protected is the first step before we spend the limited budget.

  • Article 24.1 and 24.2 – Data administrator duties
    These two articles impose a data protection obligation on the data controller, as an auditable and controlled process. If we consider databases, data warehouses, big data and file repositories, DAM was created exactly for this.
  • Article 28 – Data processor duties
    For data processing on behalf of a data controller (a very common situation), the processor must guarantee that access to PI takes place under the controller's written authorization. Only data access monitoring can provide a real access registry.
  • Article 30 – Records of processing activities – introduces the requirement of personal information access accountability.
    Small companies will implement this goal by creating a simple registry based on manual data access descriptions, sometimes enriched with an approval workflow.
    However, this low-cost solution is tied to complex reporting and the lack of a non-repudiable registry, so you should consider a better mechanism to register access to GDPR-protected data.
  • Article 32.1(d) – Security of processing points to vulnerability assessment and system hardening.
    Popular vulnerability assessment platforms treat relational databases superficially. DAM, which originated in the RDBMS world, provides rich checks and does not focus only on CVEs and standards (CIS, STIG). Based on years of experience, it also includes analysis of SQL traffic, the influence of configuration changes on the risk score, authorization snapshots and identification of excessive rights.
    For the most critical systems, extending the existing VA solution in your environment with DAM can be very helpful.
  • Article 33.3(a) – Data breach notification imposes on the controller not only the requirement of prompt notification (72 hours).
    The breach notification should also contain information about the scale of the leakage or other type of incident. Only DAM solutions can identify this scope (SQL audit) and minimize the damages related to data subject notification and possible fines.
    Be aware that:

    • DLPs (agent and network) cover only data on workstations and remote accesses. What about local sessions on servers? Are you sure that your DLP provides the same SQL structure and session context analysis as DAM solutions specialized for this purpose?
    • PIMs monitor the access of privileged users to production systems. They are not aware of SQL syntax and session context. PIM should be considered in a GDPR compliance program, but the real value appears when DAM and PIM are integrated together (directly or at the SIEM level).
  • Article 34 – Communication of a personal data breach to the data subject
    Technically DAM solutions are able to parse the output of SELECTs, but the usability of this functionality is limited. The size of the outgoing stream is unpredictable and can lead to a situation where the monitoring system needs more hardware resources than the monitored one (especially on a data warehouse).
    However, DAM can provide the list of SQL instructions executed inside a suspicious session and simplify recognition of the attack's range. In case of data modification (DMLs), the audited SQL activity can directly identify the changes and the required remediation.

Does DAM provide protection for applications in the GDPR context?

The 3-tier architecture of most applications (web client, application server, data store) anonymizes access to data at the silo level. So we cannot identify the application user at the SQL level based only on the database user name, which points to an account from the connection pool. However, DAM can be configured to extract this information from the SQL, the JDBC encapsulation message, web server logs and other streams. In most cases this kind of integration requires additional implementation effort, including, in the worst case, application code changes.
So, if the application user context is visible at the DAM level, we can utilize it exactly the same way as described earlier, with two caveats:

  • Never kill a session in the connection pool, because the SQL stream inside it belongs to many application users. A killed session will raise exceptions in the application layer and reinitialize the application session for thousands of clients.
  • Never mask data or rewrite SQL in the connection pool. Masked data will in most cases have an inappropriate format and will lead to application exceptions. Even if the masked data have an accepted format (data tokenization), the information receiver will have no idea about this fact and can make business or legal decisions based on incorrect information – data masking for applications should be implemented in the application or presentation layer.
    Rewriting SQL inside a SQL transaction can change its essence and lead to a loss of data consistency.

DAM without the application user context is still valuable in this stream – it can identify anomalies, errors and behavioral fluctuations using quantitative analysis.

Can DAM implement data pseudonymization?

Hmm, we should start from the basic question – what is pseudonymization?
I have seen many web articles which directly equate this word with data masking, but I disagree with this approach.

GDPR defines pseudonymization as the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.

I treat this definition as a consequence and continuation of the data minimization process. Briefly, if PI data are separated from the transactional ones (data minimization), the natural relation between these two stores (for example a customerID) should be used in the whole data processing flow. Only on demand and with approval can the customerID be translated into a form which identifies the person.
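A minimal sketch of that idea (all structures and names below are invented for illustration):

# PI kept separately, under its own technical and organizational controls
pi_store = {"C-1001": {"name": "John Smith", "city": "Warsaw"}}

# the rest of the processing sees only the pseudonym (customerID)
transactions = [("C-1001", "2018-04-30", 120.50)]

def reidentify(customer_id, approved=False):
    # translation back to a person happens only on demand and with approval
    if not approved:
        raise PermissionError("re-identification requires approval")
    return pi_store[customer_id]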
Can DAM help here? – NOPE.
However, implementing data minimization and pseudonymization for existing systems entails complete application redevelopment – who can afford it? So only new, GDPR-ready applications will come with this kind of functionality on board.

For existing systems we try to avoid personal information identification by using data masking, and here DAM can also be helpful:

  • preproduction (test) data – though why DAM here instead of data tokenization?
  • access outside the application stream:
    • masking of SELECT output – most DAMs provide this functionality, but efficiency is the main problem
    • query rewrite – very suitable; it provides the possibility to tokenize or encrypt data instead of simple masking
  • access from the application stream – as I mentioned earlier, application masking should be implemented in the application or presentation layer only

Member states implementation of GDPR

GDPR is a regulation and unifies law in the European Union, but in a few of its articles we can find derogations. A good example is Article 9.4, where health records can be managed in a different way according to a member state's decision. Does it mean that my decision about the scope and type of protection should be postponed until the national parliament implements the law?
Definitely not – you should not wait, because your data may contain personal information about citizens of another EU country, and you can be sued based on their state's law.

DAM and “right to be forgotten”

It is a common question raised during DAM discussions.
Article 17 introduces the subject's right to have their personal information removed on request. DAM is a monitoring solution; it does not cooperate directly with the DB engine (to cover the SoD requirement) and has no authorization to modify data. So this simple explanation leads to only one correct answer to the title question – DAM is not a component which can be useful for implementing the citizen's right to be forgotten.
By the way, who will agree to data removal on a system with thousands of relations, where it can lead to a loss of data consistency discovered a year later? I think that only new systems with fully implemented data minimization and pseudonymization principles will be able to satisfy this right in an easy way. If all personal information is separated from the transactions, PI removal or simple encryption will provide a suitable solution without any additional effort.

Administrative fines mantra

Have you seen any GDPR-related article without a remark about “huge fines up to 20 million euro or 4% of company turnover”?
Do you believe that your government will decide to kill local, medium and small companies because of GDPR?
If your answers are negative, you should consider a much more interesting case. In Article 82 GDPR introduces the citizen's right to compensation, with a body of appeal attached directly to the EU Council. Many organizations seriously consider the costs of civil actions and their possible influence on the business.

New type of ransomware
Standard ransomware based on data encryption is not very effective because victims rarely pay (private persons are not able to pay large amounts of money, backups exist, bitcoin accounts get blocked).
With GDPR, stolen data can be a simple way to extort a ransom from an organization wishing to avoid penalties and massive civil actions.
I think that data already harvested from unaware companies are stored somewhere on the darknet as a starting package for a new type of “business” next year. 😦

Summary:

DAM definitely should be considered an important element of any GDPR compliance program because of:

  • PI processing monitoring
  • data classification
  • data masking
  • unauthorized data access protection
  • vulnerability assessment

and achieves the best value when it is integrated with PIM, IAM, Encryption and SIEM.


GIM video guideline

This video covers Guardium Installation Manager installation, configuration and administration.

Chapters timeline:

  1. Introduction 0’00”
  2. GIM installation 3’14”
  3. GIM self-upgrade 7’36”
  4. GIM deinstallation 12’07”
  5. GIM failover configuration 13’42”
  6. GIM deinstallation from data node 17’09”
  7. GIM in listener mode 18’14”
  8. GIM listener discovery and group activation 20’41”
  9. GIM reconfiguration 23’46”
  10. GIM report and GRDAPI calls 27’19”
  11. GIM Authentication 34’12”
  12. Installation with auto_set_tapip 42’35”
  13. Modules management 44’03”
  14. GIM on Windows 47’30”
  15. GIM troubleshooting  – network problems 53’52”
  16. GIM troubleshooting – GIM restart 54’37”
  17. GIM troubleshooting – configuration file modification 55’12”
  18. GIM troubleshooting – central log 57’03”
  19. GIM troubleshooting – managing standalone STAP installation by GIM 59’13”
  20. GIM troubleshooting – global parameters 63’00”
  21. GIM troubleshooting – process respawn 64’19”
  22. GIM troubleshooting – IP-PR status 66’26”
  23. Dynamic groups in GIM – 67’45”

Link: https://youtu.be/OSJnIXO-Kew

Summary:

GIM is a very useful service. It eases Guardium implementation and administration and lowers TCO. It is implemented in a secure way, in a client-server architecture.

Some portal areas still await rebuilding onto the new framework – the module parameter settings especially.

Resources:

Guardium definitions – GIM reports (Clients Status and Installed Modules) with assigned GRDAPI functions and mapped attributes, plus a GIM Dashboard (all 4 important reports together; refer to the reports attached in the first position)

Network ports list used in the Guardium communication – http://www.ibm.com/support/docview.wss?uid=swg21973188

GIM module states (Querying module states) – http://www.ibm.com/support/knowledgecenter/SSMPHH_10.1.0/com.ibm.guardium.doc.stap/gim/gim_cli.html

Using sudo during GIM installation – http://www-01.ibm.com/support/docview.wss?uid=swg21984662

Agent convention naming – http://www-01.ibm.com/support/docview.wss?uid=swg21698858

Operating system upgrade – http://www-01.ibm.com/support/docview.wss?uid=swg21679002

How To Install GIM Client On Unix Server? – http://www-01.ibm.com/support/docview.wss?uid=swg21991742

Uninstall Guardium UNIX S-TAP and GIM manually – http://www-01.ibm.com/support/docview.wss?uid=swg21982923

GIM Server Allocation – http://www.ibm.com/support/docview.wss?uid=swg27049424

Limitations:

GIM is not available on z/OS (zLinux is supported) and iSeries (aka AS/400).

The reporting domains are hidden – there is no possibility to modify the reports or to create alerts based on them.

Remarks:

USING SUDO

I am using sudo to install GIM – the sudoers file configuration is not a part of Guardium.

MODULE UNINSTALLATION BY GRDAPI

We have 2 GRDAPI commands responsible for module uninstallation:

gim_uninstall_module – allows removing the pointed module at a defined date on a given clientIP.

If the date is omitted, the module is only marked for uninstallation, and the second command – gim_schedule_uninstall – can initiate it at a defined date.
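For illustration, such a call could look like the line below. The parameter names here are my assumption written from memory – please verify them against the GRDAPI reference for your Guardium version:

# illustrative shape only – parameter names are an assumption
grdapi gim_uninstall_module clientIP=192.168.1.50 module=BUNDLE-STAP date="2018-05-10 20:00:00"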

Add-ons

It is also possible to install STAP from the command line with self-registration in the GIM service. This article describes it and assumes that the GIM client has been installed and registered before – http://www.ibm.com/support/docview.wss?uid=swg21998933


Appliance installation and configuration video guideline

This video contains a set of appliance (collector, aggregator) installation scenarios and covers Guardium configuration in standalone and enterprise architectures.

I did not want to split it into many small parts, so the specific tasks are listed below with their times:

  • Introduction – 0’00”
  • VM Template – 2’47”
  • Simple collector installation – 4’38”
  • Installer boot options – 8’05”
  • Appliance with software disk encryption – 9’36”
  • Appliance with software RAID – 12’20”
  • Simple aggregator installation – 15’44”
  • Basic network configuration – 16’38”
  • Time and timezone configuration – 20’03”
  • Hostname and domainname setup – 21’50”
  • VMWare tools installation – 22’58”
  • License installation in standalone configuration – 24’41”
  • Personal administration account creation – 28’20”
  • Manual appliance patching – 30’49”
  • Central Manager configuration – 40’22”
  • License installation on CM – 41’50”
  • CM backup configuration – 43’48”
  • Shared Secret – 45’31”
  • Unit registration – 46’36”
  • Remote patching from Central Manager – 48’11”
  • Summary – 52’18”

If you are looking for guidelines in other areas, leave me a message.

Direct link: https://youtu.be/dU_PDZ2g9mg

Appendix:

  • On hardware appliances (delivered by IBM) the default passwords are changed from “guardium” to the ones mentioned in this technote (added 17-01-2017)
  • The largest disk space manageable by an appliance is 16 TB (added 27-01-2017)