
Information Centric Analytics Best Practices - Risk Vectors


Creating and Configuring Risk Vectors and Risk Scores

This article describes how Symantec Information Centric Analytics (ICA) incorporates risk vectors to calculate and display risk scores. In ICA, risk vectors compare activities, events, and incidents to similar activities, events, and incidents. Risk vectors are used to calculate risk scores and are defined for applications, computer endpoints, IP addresses, persons, and users. For example, person risk vectors compare a person's activities, events, or incidents to the person's usual activities, to peers in the same department, and to peers with the same manager to determine the person's risk level.

A risk weight is specified to allow certain vectors to contribute more to a risk score. For example, a failed authentication risk vector may have a weight of 5, and a successful authentication risk vector may have a weight of 1. When the risk score is computed, the failed authentication provides a larger contribution to the score than the successful authentication.
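
To illustrate with simplified, hypothetical numbers: if the failed authentication vector scores 3 with a weight of 5 and the successful authentication vector scores 2 with a weight of 1, their weighted contributions are 3 x 5 = 15 and 2 x 1 = 2, so failed authentications dominate the combined score.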

Creating Risk Vectors Using Analyzer

To create a risk vector based on a cell in the Analyzer, do the following:

1. Navigate to the Analyzer.

2. Create a view or open an existing view.

3. Right-click the cell that has the data you want to use for the risk vector.

4. Click Create Risk Vector.

5. Select the entity type.

6. Enter a name for the risk vector.

7. Click the Enabled check box to enable the risk vector. Enabling the risk vector allows the risk vector to be used in risk calculations.

8. Click the Displayed check box to display the risk vector on the radar graphs on entity details pages.

9. Enter a weight for the risk vector. The risk weight allows certain vectors to contribute more to a risk score.

10. Click Save to save the risk vector.

Configuring Risk Scoring Settings

Risk scoring settings include configuration options for displaying vectors and setting risk ratings for each entity type. The vectors and ratings appear on the Risk Level tab of the individual entity pages in the Assets and Identities portals.

Configuring Application Risk Scoring Settings

Application risk scoring settings include configuration options for displaying vectors and setting risk ratings. The vectors and ratings appear on the Risk Level tab of the individual application pages in the Assets portal.

To configure application risk scoring settings, do the following:

1. In the ICA administration portal, select Settings, and then select General Settings.

2. Go to the Application Risk Scoring settings section.

3. Configure the settings as needed. The following list describes the configuration options. The thresholds trigger a notification based on the previous day's events.

  • Display the vector scores sorted by ordinal, true or false to be sorted by the application's vector scores: Enables the sorting and display of risk vector scores.
  • Enable Application Risk Score Calculation: Enables calculation of risk scores for applications.
  • Include the Unrated applications as part of the percentage of low: Includes unrated applications in the percentage of low-risk applications.
  • Literal threshold (inclusive) for Critical risk ratings: Sets the raw risk score at or above which applications are considered critical risk.
  • Literal threshold (inclusive) for High risk ratings: Sets the raw risk score at or above which applications are considered high risk.
  • Literal threshold (inclusive) for Medium risk ratings: Sets the raw risk score at or above which applications are considered medium risk.
  • Number of days back to use in calculating application risk score ratings: Sets the number of days used to calculate application risk score ratings.
  • Number of desired Critical application risk score ratings: Sets the number of applications considered critical. In a company with 100 applications the number may be 10, and in a company with 20,000 applications it may be 50.
  • Percentage of desired High application risk score ratings: Defines the percentage for the high category of the application risk score. The default is the top 2 percent.
  • Percentage of desired Low application risk score ratings: Defines the percentage for the low category of the application risk score. The default is the bottom 66 percent.
  • Suppress vectors whose values for application, peers, and organization are all zero: Hides vectors when the application, its peers, and the organization all have a value of zero.
  • The maximum number of vectors to be displayed in the vector graph: Sets the maximum number of risk vectors to display on the vector graph. Enter 0 to display all risk vectors with a score greater than zero.
  • The minimum number of vectors to be displayed in the vector graph: Sets the minimum number of risk vectors to display on the vector graph.
  • Use the literal threshold to assign risk ratings: Enables the use of the literal thresholds to assign risk ratings.

NOTE: Administrators can add, delete, and change the vectors used for the risk score.

Configuring Computer Endpoint Risk Scoring Settings

Computer endpoint risk scoring settings include configuration options for displaying vectors and setting risk ratings. The vectors and ratings appear on the Risk Level tab of the individual computer endpoint pages in the Assets portal.

To configure computer endpoint risk scoring settings, do the following:

1. In the ICA administration portal, select Settings, and then select General Settings.

2. Go to the Computer Endpoint Risk Scoring settings section.

3. Configure the settings as needed. The following list describes the configuration options. The thresholds trigger a notification based on the previous day's events.

  • Display the vector scores sorted by ordinal, true or false to be sorted by the computer endpoint's vector scores: Enables the sorting and display of computer endpoint risk vector scores.
  • Enable Computer Endpoint Risk Score Calculation: Enables calculation of risk scores for computer endpoints.
  • Include the Unrated computer endpoints as part of the percentage of low: Includes unrated computer endpoints in the percentage of low-risk computer endpoints.
  • Literal threshold (inclusive) for Critical risk ratings: Sets the raw risk score at or above which computer endpoints are considered critical risk.
  • Literal threshold (inclusive) for High risk ratings: Sets the raw risk score at or above which computer endpoints are considered high risk.
  • Literal threshold (inclusive) for Medium risk ratings: Sets the raw risk score at or above which computer endpoints are considered medium risk.
  • Number of days back to use in calculating computer endpoint risk score ratings: Sets the number of days used to calculate computer endpoint risk score ratings.
  • Number of desired Critical computer endpoint risk score ratings: Sets the number of computer endpoints considered critical. In a company with 100 computers the number may be 10, and in a company with 20,000 computers it may be 50.
  • Percentage of desired High computer endpoint risk score ratings: Defines the percentage for the high category of the computer endpoint risk score. The default is the top 2 percent.
  • Percentage of desired Low computer endpoint risk score ratings: Defines the percentage for the low category of the computer endpoint risk score. The default is the bottom 66 percent.
  • Suppress vectors whose values for computer endpoint, peers, and organization are all zero: Hides vectors when the computer endpoint, its peers, and the organization all have a value of zero.
  • The maximum number of vectors to be displayed in the vector graph: Sets the maximum number of risk vectors to display on the vector graph. Enter 0 to display all risk vectors with a score greater than zero.
  • The minimum number of vectors to be displayed in the vector graph: Sets the minimum number of risk vectors to display on the vector graph.
  • Use the literal threshold to assign risk ratings: Enables the use of the literal thresholds to assign risk ratings.

NOTE: Administrators can add, delete, and change the vectors used for the risk score.

Configuring IP Risk Scoring Settings

IP risk scoring settings include configuration options for displaying vectors and setting risk ratings. The vectors and ratings appear on the Risk Level tab of the individual IP address pages in the Assets portal.

To configure IP risk scoring settings, do the following:

1. In the ICA administration portal, select Settings, and then select General Settings.

2. Go to the IP Risk Scoring settings section.

3. Configure the settings as needed. The following list describes the configuration options. The thresholds trigger a notification based on the previous day's events.

  • Display the vector scores sorted by ordinal, true or false to be sorted by the IP's vector scores: Enables the sorting and display of IP address risk vector scores.
  • Enable IP Risk Score Calculation: Enables calculation of risk scores for IP addresses.
  • Include the Unrated IP addresses as part of the percentage of low: Includes unrated IP addresses in the percentage of low-risk IP addresses.
  • Literal threshold (inclusive) for Critical risk ratings: Sets the raw risk score at or above which IP addresses are considered critical risk.
  • Literal threshold (inclusive) for High risk ratings: Sets the raw risk score at or above which IP addresses are considered high risk.
  • Literal threshold (inclusive) for Medium risk ratings: Sets the raw risk score at or above which IP addresses are considered medium risk.
  • Number of days back to use in calculating IP address risk score ratings: Sets the number of days used to calculate IP address risk score ratings.
  • Number of desired Critical IP address risk score ratings: Sets the number of IP addresses considered critical. In a company with 100 IP addresses the number may be 10, and in a company with 20,000 IP addresses it may be 50.
  • Percentage of desired High IP address risk score ratings: Defines the percentage for the high category of the IP address risk score. The default is the top 2 percent.
  • Percentage of desired Low IP address risk score ratings: Defines the percentage for the low category of the IP address risk score. The default is the bottom 66 percent.
  • Suppress vectors whose values for IP address, peers, and organization are all zero: Hides vectors when the IP address, its peers, and the organization all have a value of zero.
  • The maximum number of vectors to be displayed in the vector graph: Sets the maximum number of risk vectors to display on the vector graph. Enter 0 to display all risk vectors with a score greater than zero.
  • The minimum number of vectors to be displayed in the vector graph: Sets the minimum number of risk vectors to display on the vector graph.
  • Use the literal threshold to assign risk ratings: Enables the use of the literal thresholds to assign risk ratings.

NOTE: Administrators can add, delete, and change the vectors used for the risk score.

Configuring Person Risk Scoring Settings

Person risk scoring settings include configuration options for the high and low risk scores, and rating options. The vectors and ratings appear on the Risk Level tab of the individual person pages in the Identities portal.

To configure person risk scoring settings, do the following:

1. In the ICA administration portal, select Settings, and then select General Settings.

2. Go to the Person Risk Scoring settings section.

3. Configure the settings as needed. The following list describes the configuration options. The thresholds trigger a notification based on the previous day's events.

  • Display the vector scores sorted by ordinal, true or false to be sorted by the person's vector scores: Enables the sorting and display of person risk vector scores.
  • Enable Person Risk Score Calculation: Enables calculation of risk scores for persons.
  • Include the Unrated persons as part of the percentage of low: Includes unrated persons in the percentage of low-risk persons.
  • Literal threshold (inclusive) for Critical risk ratings: Sets the raw risk score at or above which people are considered critical risk.
  • Literal threshold (inclusive) for High risk ratings: Sets the raw risk score at or above which people are considered high risk.
  • Literal threshold (inclusive) for Medium risk ratings: Sets the raw risk score at or above which people are considered medium risk.
  • Number of days back to use in calculating person risk score ratings: Sets the number of days used to calculate person risk score ratings.
  • Number of desired Critical person risk score ratings: Sets the number of persons considered critical. In a company with 100 people the number may be 10, and in a company with 20,000 people it may be 50.
  • Percentage of desired High person risk score ratings: Defines the percentage for the high category of the person risk score. The default is the top 2 percent.
  • Percentage of desired Low person risk score ratings: Defines the percentage for the low category of the person risk score. The default is the bottom 66 percent.
  • Suppress vectors whose values for person, peers, and organization are all zero: Hides vectors when the person, their peers, and the organization all have a value of zero.
  • The maximum number of vectors to be displayed in the vector graph: Sets the maximum number of risk vectors to display on the vector graph. Enter 0 to display all risk vectors with a score greater than zero.
  • The minimum number of vectors to be displayed in the vector graph: Sets the minimum number of risk vectors to display on the vector graph.
  • Use the literal threshold to assign risk ratings: Enables the use of the literal thresholds to assign risk ratings.

NOTE: Administrators can add, delete, and change the vectors used for the risk score.

Configuring User Risk Scoring Settings

User risk scoring settings include configuration options for the high and low risk scores, and rating options. The vectors and ratings appear on the Risk Level tab of the individual user pages in the Identities portal.

To configure user risk scoring settings, do the following:

1. In the ICA administration portal, select Settings, and then select General Settings.

2. Go to the User Risk Scoring settings section.

3. Configure the settings as needed. The following list describes the configuration options. The thresholds trigger a notification based on the previous day's events.

  • Display the vector scores sorted by ordinal, true or false to be sorted by the user's vector scores: Enables the sorting and display of user risk vector scores.
  • Enable User Risk Score Calculation: Enables calculation of risk scores for users.
  • Include the Unrated users as part of the percentage of low: Includes unrated users in the percentage of low-risk users.
  • Literal threshold (inclusive) for Critical risk ratings: Sets the raw risk score at or above which users are considered critical risk.
  • Literal threshold (inclusive) for High risk ratings: Sets the raw risk score at or above which users are considered high risk.
  • Literal threshold (inclusive) for Medium risk ratings: Sets the raw risk score at or above which users are considered medium risk.
  • Number of days back to use in calculating user risk score ratings: Sets the number of days used to calculate user risk score ratings.
  • Number of desired Critical user risk score ratings: Sets the number of users considered critical. In a company with 100 users the number may be 10, and in a company with 20,000 users it may be 50.
  • Percentage of desired High user risk score ratings: Defines the percentage for the high category of the user risk score. The default is the top 2 percent.
  • Percentage of desired Low user risk score ratings: Defines the percentage for the low category of the user risk score. The default is the bottom 66 percent.
  • Suppress vectors whose values for user, peers, and organization are all zero: Hides vectors when the user, their peers, and the organization all have a value of zero.
  • The maximum number of vectors to be displayed in the vector graph: Sets the maximum number of risk vectors to display on the vector graph. Enter 0 to display all risk vectors with a score greater than zero.
  • The minimum number of vectors to be displayed in the vector graph: Sets the minimum number of risk vectors to display on the vector graph.
  • Use the literal threshold to assign risk ratings: Enables the use of the literal thresholds to assign risk ratings.

NOTE: Administrators can add, delete, and change the vectors used for the risk score.



Information Centric Analytics Best Practices - Using the Integration Wizard


Integration Wizard Best Practices

With the Integration Wizard in Symantec Information Centric Analytics, users have the flexibility to import almost anything from different data sources. Though that flexibility has many benefits, it also poses a risk for an ICA implementation. Importing too much data may cause all sorts of problems, the most obvious of which is filling up the ICA database unnecessarily. On the other hand, importing too little may make the risk analysis results that ICA produces less accurate. This article explains what and how much data should be imported, along with other related items.

What to import

While it is obvious that data being imported should have a purpose in ICA, it is less obvious which specific kinds of data to choose. There are a few questions you should ask yourself to help answer the question "Should I import this data?"

  • Can I map this data to an ICA entity?
  • Does the data add value to determining the risk of entities in ICA?
  • Does the data contain elements that can associate it to a source and/or destination?

The questions above are the first step in deciding which pieces of data to import. Once those questions have been answered, you can proceed with the specifics for each type of data. Below are some guidelines and best practices collected from the various implementations we have done. Note that the information below is most useful when you have to configure a custom Integration Wizard import, and may not apply to imports done through out-of-the-box integration packs.

Computer Endpoints, IPs, Users/Persons/Organizations, Applications

With any of these base entities in ICA, always consider going to an authoritative data source. For example, for computer endpoints, good candidates include Active Directory, asset management databases, and CMDBs; for users/persons and organizations, HR databases are a good source, especially for organization hierarchy.

Authentication Events

IMPORTANT NOTE: All authentication events, failed and successful, may contribute to an entity's risk rating. However, capturing all authentication data, especially if you have several sources to pull from, may cause excessive data import. If the authentication data imported needs to be limited, start by filtering out generic local user accounts that may generate a lot of noise. Next, look for events that may be authentication-related but are not necessarily login successes and failures. If you need to trim the authentication events further and have several different sources and authentication types, consider limiting the events to only those coming from Windows and/or Unix and Linux systems.

Endpoint Protection Events

For endpoint protection events, make sure that the data imported relates to detections, infections, and the like. Administrative events, such as virus definition updates, should be excluded.

DLP Incidents

DLP incident data generally includes only violations, or anything related to whatever triggered an incident to be generated by a DLP solution. As such, there is not much filtering that can be done to DLP incident data without losing information significant to the risk assessment and scoring function of ICA. However, any test incidents and incidents related to policy testing are good candidates to exclude from DLP data imports.

Web Activity Events

IMPORTANT NOTE: As with authentication events, limiting incoming data is key. As a rule of thumb, confine the data to blocked web activities and web activity that exceeds 10 MB of outgoing traffic (bytes out) regardless of the action taken, i.e., permitted or blocked.
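
As an illustration, a data source query implementing this rule of thumb might include a filter like the sketch below. The table and column names (WebActivityStaging, ActionTaken, BytesOut) are hypothetical placeholders for your source's schema, with BytesOut assumed to be in bytes:

SELECT *
FROM WebActivityStaging       -- hypothetical staging table for web proxy events
WHERE ActionTaken = 'Blocked' -- keep all blocked web activity
   OR BytesOut > 10485760     -- keep anything over 10 MB out, whether permitted or blocked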

Integration Wizard Recommendations

1. Review the column names from the integration mapping entity prior to developing the data source query for the integration. This will give you a better idea of what data the integration entity will accept for the integration mapping.

2. When creating the data source query, alias the column names within the query to align column names from the data source with the column names defined for the entity. When column names are aliased, the Integration Wizard automatically associates the source with the target using the name, eliminating the need to manually map source to target columns. For example, if you are loading computer endpoints and your source column is called AssetTag, you should alias the column name to SourceComputerKey.
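
For instance, a computer endpoint data source query might alias its columns as in the sketch below; dbo.Assets, AssetTag, and Hostname are hypothetical source names, while the aliases are chosen to match the column names the integration mapping entity expects:

SELECT [AssetTag] AS SourceComputerKey, -- source column aliased to the entity's key column
       [Hostname] AS ComputerName       -- hypothetical source column aliased to the expected target name
FROM dbo.Assets                         -- hypothetical asset management table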

3. To prevent data type errors from occurring during nightly processing, use CAST statements in your SQL to ensure there are no data type conflicts and to minimize the risk of data size conflicts when loading data into ICA.

  • Sample CAST statement that casts the column sAMAccountName to an NVARCHAR(256) field and aliases the column to AccountName. This ensures the data selected is capped at 256 characters, and the AccountName alias lets the column be mapped automatically when the query references an integration mapping:
    • CAST([sAMAccountName] AS NVARCHAR(256)) AS AccountName

4. A number of formulas ship out of the box that can be used to supplement your integration. The most common one converts EPOCH time to SQL Server time when building a Splunk-based integration.

5. When creating a formula, enclose each variable in '{ }'. Doing so allows you to specify a column value to pass at run time. Formulas are applied when the data is going into the Stg_Preprocess tables, not the staging table used when extracting data out of the source into staging.
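
As a minimal sketch of both points, an epoch-to-datetime formula might look like the following, where {EventEpochTime} is a hypothetical variable holding seconds since 1970-01-01 (the form in which Splunk typically reports time):

DATEADD(SECOND, {EventEpochTime}, '1970-01-01') -- converts epoch seconds to a SQL Server datetime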

6. When defining the integration mapping, you can specify a fixed value, a source column from a query, a pre-existing formula, or a new custom formula.

Recommended IW Entities Data Load Order into ICA

1. Organizations

  • When loading Organizations, there are three required fields for the Organizations entity in the IW:
  • Organization Abbreviation – a free-form text field that serves as the abbreviation of the organization. The data type for Organization Abbreviation is nvarchar(10).
  • Organization Name – a free-form text field that allows you to specify an organization name.
  • Organization SubOrgName – a free-form text field that allows you to specify a sub-organization of the organization, if one exists.

2. Regions

  • Regions are associated to countries and a region can be associated to one or many countries.
  • Use a standardized listing of countries and regions if necessary to supplement incomplete country information.

3. Countries

  • A country can only be associated to one region.
  • Countries are most commonly associated to Users and ComputerEndpoints. They will also be associated to other entities like Authentication Events, Web Activity, and DIM Incidents.

4. Users

  • The primary key for Users is the combination of Account Name and NetBIOS Domain. If the same account name appears with a different domain for a user ID, multiple user accounts will be created for the user.
  • When attempting to associate a user to another entity, like computer endpoints, authentication events, and DIM/DAR incidents, you will need to provide a combination of Account Name and NetBIOS Domain to link the user to the record.
  • Users will generate people records if the user contains an email address and a manager.

5. Vendors

  • The primary key for a Vendor is the Vendor Name. Prior to uploading vendor names, evaluate the data to ensure that each vendor is named in a consistent manner; there could be inconsistencies in the way a vendor is named in the source system.
  • Vendors can be associated to many users; the vendor information will be stored in an object entitled LDW_VendorsToUsers.
  • Vendors can be categorized by Industry, associated to Vectors, and assigned Vector grades.

6. Applications

  • The primary key columns for the Applications entity are Application Name and Source Application ID.
  • Users can be associated with an application via email address, or you can create and associate users by providing an owner account name and owner NetBIOS domain.
  • You can create and associate compliance scopes to an application, and an application can be associated with one to many compliance scopes.
  • Applications can also be associated to application categories.

7. Application Contacts

  • The primary key column for application contacts is the source application ID from the external source system.
  • To look up users, you just need to provide an email address. Optionally, you can configure the IW to create and associate users through this feed by providing a Contact Account Name and a Contact NetBIOS Domain.
  • When Creating application contacts, application contact roles can also be created by using the Application

8. ComputerEndpoints

  • The primary key columns for a computer endpoint are the Computer Name and the Source Computer Key. The Source Computer Key should be the primary key of the computer endpoint in the source system. The NetBIOS Domain can also be associated to a computer endpoint, but it is an optional field.
  • Applications can be associated to ComputerEndpoints, and an Application Assignment Tier is also associated to the Computer Endpoint.
  • You can also associate a country to a computer endpoint using the country name, and you can look up an organization using the Organization Abbreviation, or create and associate organizations by feeding an organization name and organization suborg name.

9. Authentication Events

  • For Windows authentication events, a good place to start is by filtering only to include the following security Event IDs: 528, 529, 530, 532, 533, 534, 535, 539, 540, 682, 4624, 4625, 4648, 4768, 4769, 4771, 4776
  • Consider excluding the following information (a combined filter sketch follows this list):
  • Authentication coming from SYSTEM
  • Hostnames that end in $
  • Logon types 0 and 3
  • When loading authentication events, the success character is a required field; pass a value of 1 for successful and a value of 0 for unsuccessful.
  • A watermark should be used when loading authentication events to ensure old authentication events are not reloaded. 
  • The Logon Type ID is not a required field, but it is highly recommended to include it when loading Authentication Events.
  • To associate users to an authentication event you can provide an email address; to create and associate users to an authentication event you should provide an Account Name and NetBIOS Domain information.
  • When associating a computer endpoint, it is a best practice to provide both the destination hostname and the source hostname.
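
Putting the above recommendations together, a source filter might look like the hedged sketch below; AuthEventsStaging and its columns (EventID, AccountName, Hostname, LogonTypeID) are hypothetical placeholders for your source's schema, and the success mapping is approximate:

SELECT EventID,
       AccountName,
       Hostname,
       LogonTypeID,
       CASE WHEN EventID IN (4625, 4771) THEN 0 ELSE 1 END AS SuccessFlag -- 1 = success, 0 = failure (approximate; verify per event ID)
FROM AuthEventsStaging                                      -- hypothetical staging table of Windows security events
WHERE EventID IN (4624, 4625, 4648, 4768, 4769, 4771, 4776) -- subset of the event IDs listed above
  AND AccountName <> 'SYSTEM'                               -- exclude authentication coming from SYSTEM
  AND Hostname NOT LIKE '%$'                                -- exclude hostnames that end in $
  AND LogonTypeID NOT IN (0, 3)                             -- exclude logon types 0 and 3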

10. DIM Incidents

  • The following fields are required when loading a DIM Incident
    • Incident Date
    • Match Count
    • Recipient Identifier
    • Sender Identifier
    • Source Incident ID
    • Source Policy ID
    • Source Policy Name
    • Source Rule ID
    • Source Rule Name
  • Users are associated to a DIM Incident by providing a Source Account Name and a Source Net Bios Domain.
  • Computer Endpoints are associated to a DIM Incident via Source Hostname.
  • Dim Incident Statuses and Severities can also be associated to a DIM Incident.

11. Endpoint Protection Events

  • The primary key columns for endpoint protection events are the Event Date and the Source Event ID from the external system.
  • IP addresses can be associated to endpoint events by providing a Destination IP Address and a Source IP Address.
  • Look up and associate users by providing a Destination Email address and a Source Email address. Alternatively, users can be created and associated to EP events by passing Destination Account Name & Destination NetBIOS Domain and Source Account Name & Source NetBIOS Domain.
  • Computer endpoints can be associated by providing a Destination Host Name and a Source Host Name for the computer endpoint.
  • Security Risks can also be associated to EP Protection Events.

12. Web Activity Events

  • The primary key columns for loading Web Activities are the Activity Date, Source Activity ID and the URL for the web activity.
  • A Destination IP Address and Source IP Address can be associated to a web activity.
  • Look up and associate users by providing a Source Email address. Alternatively, users can be created and associated to Web Activities by passing Source Account Name & Source NetBIOS Domain.
  • Web Activities can be categorized by providing a Category Name for the web activity entity.
  • Severities can be associated to a Web Activity by providing a Severity Name.
  • The action taken via a web activity can be tracked by providing the Action Taken and the Disposition. After loading, the action taken information is stored in the object LDW_WebActivityActionTaken. If you provide new actions that are synonyms for blocked actions, you will need to update the action to have IsBlocked=1 when uploading the Web Activity, as in the sketch below.
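
For example, if a new source supplies an action value that is a synonym for a block, the flag could be set with an update along these lines; 'Dropped' is a hypothetical action name, and the ActionTaken/IsBlocked column names follow the description above:

UPDATE LDW_WebActivityActionTaken
SET IsBlocked = 1             -- treat this action as a blocked action
WHERE ActionTaken = 'Dropped' -- hypothetical synonym for a block from the new source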

Analyzing the Data

When dealing with data imported from sources other than those covered by out-of-the-box integration packs, the data may turn out to be unpredictable. Prior to importing data into ICA, more specifically into the logical data warehouse (LDW) tables, you should take the time to analyze the data. There are two primary things to determine when analyzing the data:

  • Are there rows of data that I can filter further?
  • Are there any fields that need to be manipulated?

If you do find data that requires manipulation, you have two main options:

  • Use a formula
    Formulas are good for short, non-complex data manipulations. For example, formulas are good for making sure that string data types stay within the allotted number of characters for the destination columns. To do that, simply use the LEFT function, e.g. LEFT({sourceColumn}, 256).
  • Use a secondary staging table
    To use a secondary staging table for data manipulation, define a second data source using the data source type SQL Server IW, with the ICA database as the database. When defining the data source query, SELECT from the staging table into which the data is first imported.
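
As a sketch, the second data source query could then reshape the rows from the first-pass staging table; Stg_WebActivityRaw and its columns are hypothetical names standing in for the staging table your first query populates:

SELECT CAST(LEFT([RawUrl], 2048) AS NVARCHAR(2048)) AS URL, -- cap string length during the second pass
       UPPER([AccountName]) AS SourceAccountName            -- example manipulation: normalize case
FROM Stg_WebActivityRaw                                     -- hypothetical first-pass staging table
WHERE [ActionTaken] IS NOT NULL                             -- example of additional row filtering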

Following the above recommendations will help to ensure you do not run into a "garbage in, garbage out" scenario with ICA.


Information Centric Analytics Best Practices - Integration Wizard Troubleshooting


Troubleshooting Integration Wizard Data Import Issues into Staging Tables

This article provides the SQL necessary to troubleshoot the IW_DataSourceQuery process that pulls data from the source into the staging environment.

1. The following query may be used to identify the data source query that failed during the nightly process. The DataSourceQueryID will be used to take a closer look at the failure in an effort to further troubleshoot the issue.

  • Execute this query to identify the DataSourceQueryIDs that currently have a status of 'F'. These DataSourceQueryIDs will be used to take a closer look into the IW_DataSourceQuery history.
SELECT * FROM IW_DataSourceQuery WHERE JobStatusFlag = 'F'
  • Next, use a DataSourceQueryID identified above to take a closer look at the Log Description for the error being produced. StatusFlag communicates the current status of the job: C = Complete, R = Running, and F = Failed. The Log Description provides the exact statement that was executed at the time of failure, and the Error Description provides the description of the error thrown by SQL Server.
SELECT * FROM vIW_DataSourceQueryHistory WHERE DataSourceQueryID = <XX>

Troubleshooting Import Rule Mapping Issues

The entity creation process is a two-step process that moves the data from staging into the pre-processing area. Once the data hits the processing area, it flows into the logical data warehouse via the entity creation phase of the process.

1. Identify the Import Rule Mapping that is experiencing a process failure by executing the first query below. Note the LogGroupID, as it is used in the second query against Log_DataTransformation (replace <xx> with the LogGroupID you noted).

SELECT * FROM vIW_ImportRuleMappingHistory WHERE StatusFlag = 'F' ORDER BY EndDate DESC
SELECT * FROM Log_DataTransformation WHERE LogGroupID = <xx>
  • StoredProcedureName will identify the process that is running at the time of failure. 
  • LogDescription will contain the exact SQL Statement that was being executed at the time of failure.
  • StatusFlag will communicate the current status of the job: C = Complete, R = Running, and F = Failed.
  • ErrorCode and Error Description will provide the details of the error that occurred while the process was running.

Disabling an Integration Wizard Data Source Query

There are certain instances within Risk Fabric where you may want to prevent a Data Source Query from running. To do this, simply navigate to the Data Source tab under Admin > Integrations, right-click the data source query, and select Disable Query.

Deleting Formulas, Import Rules, Import Rule Mappings, Data Sources and Data Source Queries  

Removing Integration Packs

To delete a previously created Integration Pack, follow the steps below:

1. Open the ICA console.

2. Navigate to the Data Integrations tab under Admin > Integrations.

3. Delete the existing Import Rule Mappings under the Integration Pack you want to remove. Before deleting an Integration Pack, all Import Rules under it must be deleted; similarly, before deleting an Import Rule, all Import Rule Mappings under it must be deleted. To delete an Import Rule Mapping, simply right-click it and select Delete Import Rule Mapping.

4. After deleting all the Import Rule Mappings under all the Import Rules included in the Integration Pack you want to delete, open SQL Management Studio and open a new query against the RiskFabric database. 

5. Execute the query below and note the ID of the Integration Pack you want to remove.

SELECT [ID], [Name] FROM IntegrationPacks

6. Execute the query below to delete all the Import Rules under the Integration Pack you want to remove, replacing <xx> with the value you retrieved in step 5.

DELETE FROM IW_ImportRule

WHERE IntegrationPackID = <xx>

7. Finally, execute the query below to delete the Integration Pack you want to remove, again replacing <xx> with the value you retrieved in step 5.

DELETE FROM IntegrationPacks

WHERE ID = <xx>

Deleting Formulas

As part of cleanup, you may want to delete formulas that a deleted Import Rule Mapping was using. To do this, follow the steps below to identify those formulas and delete them from the RiskFabric database.

1. Open SQL Management Studio.

2. Connect to the SQL server hosting the RiskFabric (ICA) database and open a new query against the RiskFabric database.

3. Execute the query below to identify formulas that are associated to Import Rule Mappings that no longer exist.  Take note of the FormulaIDs of the formulas you want to delete.

SELECT * FROM IW_Formula
WHERE FormulaID NOT IN (SELECT FormulaID FROM IW_ImportRuleMappingPreProcessColumnFormula)
AND FormulaID > 1000000

4. Execute the query below to delete the formulas, replacing <xx> with a comma-separated list of the FormulaIDs retrieved in the previous step.

DELETE FROM IW_Formula

WHERE FormulaID IN (<xx>)

Removing Data Sources

To delete a previously created Data Source, follow the steps below:

1. Open the ICA console.

2. Navigate to the Data Integrations tab under Admin > Data Sources.

3. Delete the existing Data Source Queries under the Data Source you want to remove. Before deleting a Data Source, all Data Source Queries under it must be deleted. To delete a Data Source Query, simply right-click it and select Delete Query.

4. After deleting all the Data Source Queries under the Data Source you want to remove, note the name/label of the Data Source you want to remove.

5. Open SQL Management Studio and open a new query against the RiskFabric database.

6. Execute the following SQL query to retrieve the LinkedServerID, which will be used to delete the Data Source in the next step. Replace <Data Source Label> with the Data Source label you noted in step 4.

SELECT LinkedServerID, LinkedServerLabel, Host FROM LinkedServers

WHERE LinkedServerLabel = '<Data Source Label>'

NOTE: The query above may return more than one row. If this is the case, locate the correct one by checking the Host column.

7. Execute the following SQL query to delete the Data Source, replacing <xx> with the LinkedServerID value retrieved in the previous step:

DELETE FROM LinkedServers WHERE LinkedServerID = <xx>


Information Centric Analytics Best Practices - Risk Optimization


Risk Optimization in Information Centric Analytics

The mission of Symantec Information Centric Analytics is to allow enterprises to make the most of their limited resources by automating as much of the data analysis and threat hunting process as possible. Symantec Information Centric Analytics is a highly configurable platform that performs automated threat hunting utilizing proprietary statistical and machine learning algorithms. Using Symantec Information Centric Analytics, Level 1 analysts are provided with a pre-vetted list of top threats and vulnerabilities, including insider threats, compromised accounts, vulnerable/infected machines, and exposed data. The vetted list is the result of the Information Centric Analytics platform's data ingestion, enrichment, and analytics process, which automatically performs the initial threat hunting. This relieves Level 1 analysts from having to pore through all that data from many different sources to figure out who should be investigated, allowing them to focus on vetting and escalation or resolution.

Information Centric Analytics' out-of-the-box Scenarios and Risk Models provide a baseline for identifying different kinds of insider threats and cyber breaches, based on common data sources found in enterprise environments. These default Scenarios and Risk Models are a starting point, from which risk models and risk vectors need to be optimized and augmented to best reflect the available data sources and business goals of the organization.

The technical objective of the administrator performing the optimization is to align risk models with the organization's desired use cases, available data, and the prevalent types of threats seen in each particular environment. For example, if an environment does not include a Cloud Access Security Broker (CASB), or it is not being fed into Information Centric Analytics, there is little point in having CASB-related scenarios included in a Risk Model. Similarly, if the company's customer list is managed in a cloud application for which there is not currently a scenario configured, one certainly should be created and included in relevant risk models.


Figure: Risk models with multiple missing cards make good candidates for optimization

On a macro level, any given Risk Model should tell a story to the analyst without too much complexity and should result in a consumable list of Risk Model instances (the number of people/users triggering the model). Too much complexity and/or too many instances mean that the Risk Model is casting too wide a net and will result in increased false positives. Too simple and/or too few instances mean that the Risk Model has too narrow a view and will result in increased false negatives (i.e., missed threats). The "goldilocks" Risk Model will typically include two to four cards triggered per stage of the model, across no more than six stages, with any one model resulting in a number of instances that is less than 1% of the total population of people being analyzed. For example, in an enterprise with 50,000 people being analyzed, no one risk model should result in more than 500 people being identified as matching the model.


Figure: Risk model triggering multiple scenarios across all steps

Just as risk models tell a story about a particular sequence of activities highlighting that a person or user is a threat, risk vectors tell the story of a user's or person's overall set of activities that may indicate they are a risk. Out-of-the-box risk vectors use the risk scores of users and people to align with common data sources and risk factors. However, if a company's environment does not include the relevant data sources, or they are not applicable to the business's use cases, then the vectors should be adjusted to best tell the story of user/person risk. Nothing is more of a letdown than drilling into a user/person only to find that their risk radar diagram and risk vectors mostly show zeros or factors that are irrelevant to the environment. Conversely, a well-populated risk radar diagram with risk vectors that align with prioritized use cases shows value and accelerates threat hunting by elevating the users/persons that need to be investigated.


Figure: Example of a well-defined risk vector diagram for a user

Reviewing Risk Vectors in Analyzer

Ensuring risk vectors are functioning as expected is critical to accurately identifying which persons, users and computers should garner the most attention for investigation and remediation. Depending on the quality of the data being ingested, risk vectors may be adversely affected, throwing off risk scores and producing inconsistent results. The following process provides a simple and straightforward approach to analyzing risk vector results to easily pinpoint potential trouble spots and ensure a healthy environment.

1. From within Information Centric Analytics, create an Analyzer view to review Risk Vectors and how prominent they are individually and relative to each other. This view will inform you how well the Risk Vectors are balanced and configured relative to data sources and customer requirements. Drag in the following:

  • Risk Vector Entity Type (Dimension on rows)
  • Risk Vector (Dimension on rows)
  • Risk Vector Count (Measures)
  • Raw Score Max (Measures)
  • Raw Score Sum (Measures)

2. For each Risk Vector Entity Type:

  • For Risk Vectors that have a Risk Vector Count of 0, or a count that is a small percentage of the total count of the entity type:
    1. For a Risk Vector Count of 0, verify whether the data source for the vector is present; if it is not, remove the vector from the entity's risk scoring.
    2. If the data source is present, review the parameters of the vector to see if the ranges are out of line with the customer's environment.
      • If the parameters appear in line with the organization's environment and goals, leave them as is. Another option is to adjust the vector's weighting up or down, based on the perceived riskiness of the activity.

3. For those Risk Vectors that have a Risk Vector Count in line with the total count of the entity type:

  • Review the Raw Score Max relative to the Raw Score Sum. In general, a high risk vector count where the Raw Score Max is more than 5% of the Raw Score Sum (assuming a large sample size) indicates that one account is dominating, and the vector probably needs to be adjusted. Add the User – Account Name dimension to help determine which account is skewing the data.
  • If the Raw Score Max or Raw Score Sum is very low compared to the Risk Vector Count, then the purpose and effectiveness of the Vector should be reviewed.
  • If the Raw Score Max or Raw Score Sum are extraordinarily high for the event type, then the purpose and effectiveness of the Vector should be reviewed, perhaps as a candidate to be split into multiple vectors (beware of the event type – if the Vector is web hits, that will naturally be a high number compared to DIM policy violations, which should be by nature a much lower order of magnitude).

Risk Model Optimization Process

The process of optimizing risk models is part art and part science. The goal is to ensure Information Centric Analytics presents a picture that demonstrates an organization's target use case and effectively catches the highest-risk activities, while also minimizing false positives and negatives. Before beginning the optimization process, it is important that you understand the desired outcomes, the organization's priorities when it comes to measuring the risk of people/users/endpoints, and the data sources available and integrated. Having this knowledge ahead of time will drive an informed optimization process and ensure that it hits the mark for the organization. The optimization process uses Symantec Information Centric Analytics' scenarios, risk models, and Analyzer ad-hoc analysis capability.

General Risk Model and Event Scenario Analysis

Using the built-in Analyzer capabilities within Symantec Information Centric Analytics, we can inventory the available Event Scenarios and their instance counts to get a better idea of which scenarios are being triggered, and identify any that may be producing too many results, making them good candidates for tuning.

1. Open two instances of Symantec Information Centric Analytics in two different browser tabs.

2. In the first tab, create an Analyzer view showing the event scenario created date range, event scenarios, and their instance counts. This view shows how long each scenario has existed and how often it is triggered, allowing you to make tuning decisions. Drag in the following:

  • Event Scenario Instance Count (Measures, sorted descending)
  • Event Scenario Name (Dimension on Rows)
  • Event Scenario Instance Created Date Range (Dimension on Rows)

3. In the second tab, create an Analyzer view to show how many people each risk model triggered for, how many cards have triggered, and how often. To create this view, drag in the following:

  • Risk Model (Dimension on Rows)
  • Stage (Dimension on Rows)
  • Card Title (Dimension on Rows)
  • Instance Count (Measures)
  • Card Count (Measures)
  • Card Event Count (Measures)

4. Note those Risk Models that have an Instance Count greater than 1% of the number of People/Users/Computers being analyzed, based on each Risk Model's focus entity.

5. Expand all Risk Models, Stages, and Card Titles.

6. Look for Card Titles with a Card Count of 0:

  • Note those Card Titles whose data types are not being ingested (note the containing Risk Model and Stage as well, and consider removing or replacing these cards).
  • Note those Card Titles whose data types are being ingested (these cards should be reviewed to ensure data is coming in correctly).

7. Note Risk Model and Stage combinations that have fewer than 2 or more than 4 Card Titles with a Card Instance Count > 0:

  • Note the Card Instance Counts and Card Event Counts
  • Consider parameter adjustments on Cards with high counts (specifically instance counts greater than 10,000) that would decrease the Card Counts while not reducing the Card’s efficacy

Additional Risk Model Review – Assessing Use Cases

When reviewing risk models, it is useful to consider the organization’s fundamental use cases and data sources currently being integrated, as this will ultimately dictate what actions should be taken for tuning.

1. Identify risk models that are applicable to the company’s use cases

2. For Risk Models that are aligned to the company's use cases and have the right sources currently integrated:

  • Iteratively adjust cards and parameters for appropriate hit rates

3. For Risk Models aligned to company’s use cases that are missing the sources listed:

  • If alternative sources are available, create cards with parameters to utilize alternatives
  • If alternative sources are NOT available, cards should be removed from risk models, and additional capabilities should be considered if sources are integrated in the future

4. For Risk Models NOT aligned to the company's use cases that have the right sources:

  • Iteratively adjust cards and parameters for appropriate hit rates.
  • For example, the "Compromised User" risk model requires some indicators of attack/compromise and/or threat intelligence. Even if the organization is focused on malicious insiders, if endpoint events and/or threat intel are available, leave the model on to take advantage of the added value it provides out of the box (assuming that the model populates properly from these sources). Otherwise, disable the scenarios associated with the risk model, or delete the risk model.

5. For Risk Models NOT aligned to the company's use cases that do NOT have the right sources:

  • Disable scenarios associated with the risk model, or delete the Risk Model.
  • For example, the "Cyber Breach Data Loss Prediction" risk model is focused on data at rest. If that is not the focus of the implementation, and there is no DAR data, it should be shut off.

Conclusion

Security is a process and not an end result, and as such, there is no perfect configuration. Getting to "wow" in Information Centric Analytics is all about the configuration telling users a balanced story that is relevant to their role and drives decisions. The above process explains the moving parts, and as experience is gained through continued optimization, the elements that require tweaking will stand out naturally, because "it just doesn't look right". Experimentation is encouraged to strike the right balance and ensure Information Centric Analytics becomes a finely tuned machine that helps maximize the value of risk analysis.


SEP - Scripts


Following on from the excellent article 'Handy SQL Queries for SEPM v14' by Tony Sutton, I thought it would be useful to create a site to collate all this information in an easy-to-use format.

This includes syntax highlighting and a paged, searchable list of scripts.

There is also a "Copy" button to quickly paste a script into SSMS or your IDE of choice.

Symantec SEP Scripts

As it's hosted on GitHub, anyone can contribute.

If you have a piece of SQL or a handy script, just create a PR or add it to an Issue and I'll add it to the list.

There's a scripts.json file which contains all the metadata about the scripts; this allows tagging of the version, author, and source(s) of each script.

If you know of any other scripts in the forum, do comment below and I'll get them added too.


Protect Symantec DCS agent


Good day to all Symantec DCS admins.

I want to provide you with a solution to one problem.

I hope that my decision is correct and will be useful.

Sincerely.

Dima

Recently, I went to a client to solve a rather simple task:

  1. Install Symantec DCS in a test environment.
  2. Install DCS agent on eight servers.
  3. Customize and verify detection only.

After a successful installation, configuration, and verification, I was asked: how can we protect the Symantec DCS agent from being stopped or having its configuration changed?

To be honest, all my previous installations of Symantec DCS included Prevention capabilities first and foremost, with Detection as an option.

Not finding anything in the documentation or on the internet, I decided to open a case with Symantec support.

First, I asked: how can the DCS agent be protected if only the Detection policy is applied?

The answer: Unfortunately, if only the IDS (Detection) policy has been applied, then Prevention is disabled and there is no possibility to protect the agent. Only the IPS (Prevention) policy is able to protect the agent. There is also no mechanism available, as there is in the SEP client, to protect the agent with a password.

Then, I asked: Maybe there is a Prevention policy that protects only the DCS agent?

The answer: Unfortunately there is no IPS Policy which protects only the DCS Agent, because by default each IPS Policy is protecting the OS files, so there is no possibility to indicate only the files belonging to the Agent. This is a result of design which has been introduced in the product at the very beginning, therefore the only way to change it is to submit the Request For Enhancement to Product Management.

Not satisfied with the answer, I decided to test it myself in my lab, and it seems I succeeded.

The lab infrastructure

Symantec DCS Server Advanced 6.8

Installed on Windows Server 2012 R2

Symantec DCS agent

Deployed to Windows 10 with a variety of software preinstalled.

Detection Policy based on “Windows_Template_Policy”

Rules: File Watch, Registry Watch, NT_Event_Log, Text_Log

Prevention Policy based on “sym_win_targeted_prevention_sbp”

Configuration:

  1. Prevention Enabled
  2. Global Policy Option
    1. Policy Override
      1. User Override
        1. Allow specific users to disable prevention completely
          1. Add user “User”
      2. SDCSS Agent Tools
        1. Ensure specific users are allowed to run the SDCSS Configuration Tools
          1. Add user “User”
  3. Sandboxes
    1. Kernel Driver Options – Disabled
    2. Remote File Access Options – Disabled
    3. Symantec Data Center Security Server Agent – Enabled
    4. Symantec Data Center Security Server Manager – Disabled
    5. Default Pset Options – Enabled
      1. Enable SDCSS Self Protection

After checking the logs for several days on the server console and on the agent, it appears that the policy ignores the processes of the operating system and the installed software, but denies access to the agent to everyone except the configured user.

Android malware - “Agent Smith”


A new Android malware, 'Agent Smith', recently infected over 25 million Android mobile devices worldwide. The malware exploits Android vulnerabilities to replace legitimate apps with malicious imitations.

The attackers behind 'Agent Smith' made use of fraudulent ads for financial gain.

The malware doesn’t steal data from a user. Instead, it hacks apps and forces them to display more ads or takes credit for the ads they already display so that the malware’s operator can profit off the fraudulent views. The malware looks for known apps on a device, such as WhatsApp, Opera Mini, or Flipkart, then replaces portions of their code and prevents them from being updated.

Agent Smith has primarily infected devices in India and other nearby countries, because the main way it spreads is through a third-party app store called 9Apps that is popular in that region. After a user downloads an infected app, the malware disguises itself as a Google-related app, with a name like "Google Updater," and then begins the process of replacing code.

In the future, it may even steal sensitive information, from private messages to banking credentials and much more.

How does Agent Smith malware work?

  • Attackers lure users into installing ‘Agent Smith’ infected apps containing encrypted malicious files.
  • The application decrypts these files and installs the malware on the device.
  • The malware then infects existing, legitimate apps in the device.

How to stay safe?

  • Always download apps from the official Google Play store, which is better regulated.
  • Check all the permissions before installing an app.
  • Ensure the device’s operating system and apps are up-to-date.

Have you checked if your device has protection? We recommend proceeding with caution when downloading apps, even from legitimate sources. 

Try SEP Mobile

For more details, see the SEP Mobile datasheet.

Managing Private Kubernetes Clusters with Secure Access Cloud


Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts".

Developers and DevOps engineers managing Kubernetes clusters and/or applications deployed within these clusters require access to the Kubernetes controllers and nodes using various methods. While such access can be achieved either via Virtual Private Networks or by exposing the management endpoints of the Kubernetes cluster via the internet, providing Zero Trust Network Access (ZTNA, also known as Software-Defined Perimeter) to these critical management interfaces is the most secure way. This article explains the basic architecture and the steps required to provide access for managing Kubernetes clusters, without assuming that the accessing party has any network access to the deployed cluster.

While the method supports limitless possibilities for management access, in this article we will focus on the following three methods:

  1. Management using kubectl utility 
  2. Management using Web Dashboards and REST APIs
  3. Management using SSH connections to the Kubernetes Cluster machines

The diagram below demonstrates an architecture for managing Kubernetes clusters via Secure Access Cloud without bastion servers:

When using this approach, there is no need to manage any bastion hosts or any other additional components in the private network. On the other hand, the developers / administrators will need the ability to authenticate directly to the K8S cluster API server and additional components from the accessing endpoint. Naturally, the accessing parties will first need to be authenticated and authorized via Secure Access Cloud.

This approach allows Kubernetes administrators to execute management commands using interfaces, such as kubectl, locally on their endpoint devices.

This alternative diagram demonstrates an option for an architecture for managing Kubernetes clusters via Secure Access Cloud with bastion servers / utility hosts located in the private networks hosting the clusters:

When considering this approach, the authentication tokens (certificates, user/role tokens, etc.) can reside only within the bastion hosts, and the accessing parties will need to authenticate (and be authorized) to the bastion environment in order to access the internal components. In this approach, the administrators can only run kubectl and similar tools on the bastion hosts.

As a precondition to the following steps, Secure Access Cloud connectors need to be deployed in the private network hosting the Kubernetes clusters. The following guide describes the deployment of the connectors: Deploying Secure Access Cloud Connector as Docker Container

Connecting with kubectl

There are two possible approaches when dealing with kubectl connections: Connecting via Bastion host or connecting directly from the user's workstation.

Connecting via Bastion host

In this configuration, the kubectl utility, as well as the relevant authentication / authorization environment, exists only at the bastion host level. In order to access this environment and perform management operations, the accessing user will need to create an interactive SSH session to the bastion host. To access the bastion host, it will need to be configured as an "SSH Application" in the Secure Access Cloud admin portal, according to the following document: Access to SSH Servers via Luminate

Connecting directly from the user's workstation

In order to connect directly from a kubectl utility running on the end user's workstation, the following steps should be taken:

1. The Kubernetes API endpoint port needs to be configured in Secure Access Cloud as a TCP Tunnel, according to the following article. This step needs to be repeated for every Kubernetes cluster being administered. If Kubernetes clusters are being defined dynamically, using the Secure Access Cloud Terraform provider is recommended.

2. KubeConfig file (usually located at ~/.kube/config) should be modified for each cluster in the following manner:

   i. In the clusters section of the configuration file, the "server" key should be replaced with a local port (see the sketch after these steps, and the notes below regarding local port configuration)
   ii. If the HTTPS certificate issued to the Kubernetes API HTTPS endpoint cannot be changed, modify the hosts file to point the domain to the localhost IP (it doesn't necessarily have to be 127.0.0.1, as long as it is an IP address that the TCP tunnel is being opened to). Alternatively, modify the KubeConfig configuration file or the kubectl command line to contain the insecure-skip-tls-verify flag. Using the flag here is less of a security concern than usual, thanks to the end-to-end authorization performed by Secure Access Cloud when the TCP tunnel is created.

3. A TCP tunnel should be established, as described in the following article. Multiple ports can be selected in case multiple Kubernetes clusters should be accessible. Optionally, the KubeConfig file can be modified using the exec command to open the relevant ports automatically, using the SSH key from the Secure Access Cloud portal.
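For illustration, below is a minimal sketch of what the clusters entry of ~/.kube/config might look like after steps 2 and 3, assuming a TCP tunnel listening on local port 8443; the cluster name and port are illustrative, not values prescribed by Secure Access Cloud:

apiVersion: v1
kind: Config
clusters:
- name: my-private-cluster          # illustrative name
  cluster:
    # kubectl now talks to the local end of the TCP tunnel instead of the
    # cluster's original API server address:
    server: https://127.0.0.1:8443
    # only needed if the API server certificate cannot be validated through
    # the tunnel (see step 2.ii above):
    insecure-skip-tls-verify: true

With this in place, commands such as kubectl get nodes run locally and are carried transparently through the tunnel.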

Connecting with Web Dashboards

Connecting with SSH


Endpoint Security - Best Practices for Companies and Employees


What Is Endpoint Security?

Any device that is connected to an organization’s network systems is known as an endpoint.

Endpoint security is the protection and monitoring of end-user devices, such as smartphones, laptops, desktop PCs and POS devices, and network access paths, such as open ports or website logins. It goes beyond antivirus tools and includes the use of security software, like Endpoint Detection and Response (EDR) tools, on central servers as well as tools on the device itself, such as ad blockers.

Tools used for endpoint protection typically include features for the detection of intrusions, such as bypassed firewalls, and behavior analysis, such as login attempts by multiple users from the same IP address. EDR security is vital to the protection of a company’s data as it secures the entry points that attackers might exploit to gain access to valuable information.

Because of outbreaks of various cyberattacks such as WannaCry ransomware, fileless malware, and more, network security has gained the attention it deserves. Enterprises are now well aware of the potential damage that any such cyberattack can cause. As a result, companies have started deploying advanced security measures, powerful firewalls, and solid monitoring systems for networks and datacenters.

Endpoint security proves much more diverse than it initially appears. Far from just protecting your digital perimeter, these solutions protect large swaths of your IT environment. In fact, you can consider the components of these solutions as types of endpoint security; each component can serve as an individual solution for your enterprise.

However, endpoint protection platforms serve as a way to enjoy the benefits of all these types of endpoint security together. Usually, this better suits your enterprise in the modern cybersecurity era; missing a crucial component could spell doom for your business.

Here are the 11 types of Endpoint Security you need to know:

1. Internet of Things (IoT) Security

IoT devices are becoming more ubiquitous in enterprise infrastructures as they help facilitate communications and business processes. Unfortunately, IoT devices generally lack inherent endpoint security: manufacturers don't prioritize IoT security in their products and often ship those devices with poor protections.

To combat this issue, providers offer IoT security as one of the types of endpoint security for enterprises. These solutions work to improve visibility into IoT devices, provide a consistent and easily upgradable layer of cybersecurity, and close security vulnerabilities in the network.

2. Antivirus Solutions

Perhaps one of the most popular and well-recognized types of endpoint security, antivirus solutions still provide critical capabilities. These include anti-malware capabilities.

As such, enterprises can protect themselves against signature-based attacks, which still arise on occasion. Additionally, antivirus solutions can scan files for malicious threats by consulting threat intelligence databases. Enterprises can install antivirus solutions directly onto their endpoints to identify threats with known signatures.

However, antivirus solutions often prove limited in defending against more advanced cyber threats. Moreover, enterprises often rely too much on antivirus alone for their digital perimeter. Of the types of endpoint security, this one certainly needs the support of others.

3. Endpoint Detection and Response

A darling among the other endpoint security tools, EDR offers a capability which fits with the detection-mitigation model of modern cybersecurity. Indeed, EDR solutions continuously monitor all files and applications entering your enterprise’s endpoints. Additionally, EDR solutions can offer granular visibility, threat investigations, and detection of fileless malware and ransomware. Also, EDR provides your investigation teams with alerts for easy potential threat identification and remediation.

4. URL Filtering

URL filtering works to restrict web traffic to trusted websites; in turn, this prevents users from accessing malicious websites or websites with potentially harmful content. As an added bonus, URL filtering can prevent surreptitious downloads on your network, granting you more control over what gets downloaded where and by whom.

5. Application Control

Unsurprisingly, application control does exactly what it says on the tin; it controls applications’ permissions, ensuring strict restrictions on what they can or cannot do. To accomplish this, it uses whitelisting, blacklisting, and gray-listing to prevent malicious applications from running and compromised applications from running in dangerous ways. As enterprises continue to embrace the cloud and the potential of third-party applications in their business processes, this proves incredibly important.

6. Network Access Control

Surprisingly, network access control overlaps with identity and access management. After all, its primary focus is on securing access to network nodes. As a result, network access control determines which devices and users can access your network infrastructure and what they can do on it. Among the types of endpoint security listed here, this one emphasizes the importance of firewalls and data limitations the most.

7. Browser Isolation

The threats facing web browsers can prove overwhelming to comprehend at first look: surprise downloads, zero-day attacks, ransomware, cryptojacking malware, and malicious browser-executable code. Moreover, these merely skim the surface of potential cyberattacks. Browser isolation works by executing browsing sessions in isolated environments where they cannot reach valuable digital assets. Therefore, activity remains restricted to isolated environments and safe interactive media streams. Additionally, the tool destroys web browser code after the user finishes browsing.

8. Cloud Perimeter Security

Endpoint security can no longer merely concern itself with your users’ devices. In addition, it must form a protective perimeter around your cloud environments and databases. Cloud providers are not responsible for your enterprise’s cybersecurity; hackers can target your cloud-stored assets with impunity unless you intervene.

Cloud perimeter security allows your enterprise to harden your cloud infrastructure against incoming threats.  

9. Endpoint Encryption

Among the types of endpoint security, encryption often suffers from the most neglect. Yet its capabilities contribute meaningfully to any business’ digital perimeter. It prevents issues such as data leaks (whether intentional or not) via data transfer by fully encrypting that data. Specifically, it encrypts data stored on endpoints.

10. Secure Email Gateways

Email constitutes the main means of data traffic entering and exiting your digital network. Thus, hackers exploit email to conceal and transmit their attacks more than any other attack vector. In fact, email may serve as the malware-delivery system as much as 90% of the time, if not more.

Secure email gateways monitor incoming and outgoing messages for suspicious behavior and prevent suspicious messages from being delivered. They can be deployed according to your IT infrastructure to prevent phishing attacks.

11. Sandboxing

A “sandbox” serves as an isolated and secure digital environment which perfectly replicates your typical end-user operating system. As such, it can contain potential threats for observation. Your IT security team can then determine their intentions before allowing them into the network proper. This makes sandboxing particularly effective against zero-day threats and attacks.

Endpoint security is of vital importance, and any negligence here can prove fatal to an enterprise. With employees relying more on smartphones and home PCs or laptops to connect to the organization’s network for their work, a centralized security solution that works only within the organization's walls will no longer serve the purpose of securing the endpoints.

Here are the best practices that should be followed by Enterprise Organizations:

1. Stop usage of Common Passwords and use multi-factor authentication

Passwords are your first line of security defense. Cybercriminals attempting to infiltrate your network will start by trying the most common passwords.

BEST PRACTICE: Ensure use of long (over 8 characters), complex (include lower case, upper case, numbers and non-alpha characters) passwords.

If you’re still relying on usernames and passwords, your systems are not secure.

All endpoints in the organization must use multi-factor authentication, such as one-time passwords or biometrics (fingerprint, face, or retina scanning), along with the regular username and password.

2. Secure Every Entrance

All it takes is one open door to allow a cybercriminal to enter your network. Just like you secure your home by locking the front door, the back door and all the windows, think about protecting your network in the same way.

Consider all the ways someone could enter your network, then ensure that only authorized users can do so.

  • Ensure strong passwords on laptops, smartphones, tablets, and WIFI access points.
  • Use a Firewall with Threat Prevention to protect access to your network.
  • Secure your endpoints (laptops, desktops) with security software such as Anti-virus, Anti-SPAM and Anti-Phishing.
  • Protect from a common attack method by instructing employees not to plug in unknown USB devices.

3. Define, Educate and Enforce Policy

Have a security policy and use your Threat Prevention device to its full capacity. Spend some time thinking about what applications you want to allow in your network and what apps you do NOT want to run in your network. Educate your employees on acceptable use of the company network. Make it official.

Then enforce it where you can. Monitor for policy violations and excessive bandwidth use.

  • Set up an Appropriate Use Policy for allowed/disallowed apps and websites.
  • Do not allow risky applications such as Bit Torrent or other Peer-to-Peer file sharing applications, which are a very common method of distributing malicious software.
  • Block TOR and other anonymizers that seek to hide behavior or circumvent security.
  • Think about Social Media while developing policy

4. Be Socially Aware

Social media sites are a gold mine for cybercriminals looking to gain information on people, improving their success rate for attacks. Attacks such as phishing, spear phishing, or social engineering all start with collecting personal data on individuals.

  • Educate employees to be cautious with sharing on social media sites, even in their personal accounts.
  • Let users know that cybercriminals build profiles of company employees to make phishing and social engineering attacks more successful.
  • Train employees on privacy settings on social media sites to protect their personal information.
  • Users should be careful of what they share, since cybercriminals could guess security answers (such as your dog’s name) to reset passwords and gain access to accounts.

5. Keep your systems updated

Keeping your systems updated in terms of hardware and software is one of the most fundamental measures for avoiding cyberattacks. Yet a considerable number of cyberattacks and issues are reported globally due to outdated systems. Keeping systems updated and adapting to the market is one of the best and easiest ways to stay safe.

Apart from updating central software and network systems, companies need to make sure that the firmware of all end devices is updated to the latest version. If updating is left to the users, they may skip it or put it off. Therefore, companies either have to require their employees to update the devices or simply roll out forced updates to the devices. This makes sure that all devices in an enterprise are updated, leaving no room for compatibility issues or malware attacks. Managing endpoints from a central location will make this process easy.

6. Encrypt Everything

One data breach could be devastating to your company or your reputation. Protect your data by encrypting sensitive data and make it easy for your employees to do so.

Ensure encryption is part of your corporate policy.

  • Sleep easy if laptops are lost or stolen by ensuring company owned laptops have pre-boot encryption installed.
  • Buy hard drives and USB drives with encryption built in.
  • Use strong encryption on your wireless network (consider WPA2 with AES encryption).
  • Protect your data from eavesdroppers by encrypting wireless communication using VPN (Virtual Private Network).

7. Segment Your Network

A way to protect your network is to separate it into zones and protect each zone appropriately. One zone may be for critical work only, while another may be a guest zone where customers can surf the internet, but not access your work network.

Segment your network and place more rigid security requirements where needed.

  • Public facing web servers should not be allowed to access your internal network.
  • You may allow guest access, but do not allow guests on your internal network.
  • Consider separating your network according to various business functions (customer records, Finance, general employees).

8. Maintain Your Network and Disable ports you don’t need

Your network, and all its connected components, should run like a well-oiled machine. Regular maintenance will ensure it continues to roll along at peak performance and hit few speed bumps.

  • Ensure operating systems of laptops and servers are updated (Windows Update is turned on for all Systems).
  • Uninstall software that isn’t needed so you don’t have to check for regular updates (e.g., Java).
  • Update browser, Flash, Adobe and applications on your servers and laptops.
  • Turn on automatic updates where available: Windows, Chrome, Firefox, and Adobe.
  • Use an Intrusion Prevention System (IPS) device to prevent attacks on non-updated laptops.

Unsecured or open ports serve as an easy means of intrusion and have been the entry point for many recent and destructive cyberattacks. Every organization must secure all network ports and disable ports that are not in use. Every endpoint must be port restricted and every port must be secured to make sure end users are using only what is needed. Endpoint devices such as Bluetooth/infrared devices and modems must be disabled when not in use.
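As a rough illustration, an unused inbound port can be blocked on a Windows endpoint with the built-in firewall; a minimal sketch, where port 445 and the rule name are just examples to adapt to your own environment:

rem List listening ports to decide which ones are actually needed
netstat -ano | findstr LISTENING

rem Block inbound TCP connections on an unused port (445 here is illustrative)
netsh advfirewall firewall add rule name="Block unused port 445" dir=in action=block protocol=TCP localport=445

In practice, such rules are usually pushed centrally (for example through Group Policy) rather than set machine by machine.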

9. Endpoint security: Enforcing least privilege

Enforcing privilege security on endpoints should be a fundamental part of any business's essentials. Applying advanced security measures and firewalls is no longer enough to secure endpoints in a corporate network. To stay secure, enterprises need to follow a more sophisticated approach, and the principle of least privilege is one such effective method of securing endpoints.

By following the principle of least privilege, only the minimum privileges or permissions are given to employees. This ensures that not everyone is provided with administrative access, which most don't really require. Having more privilege than required opens the door to numerous errant or malicious actions at an endpoint. There is also the possibility that devices with administrative rights can be used as a means of corrupting the entire organization's network. Enforcing least privilege will also contain and reduce the impact of cyberattacks on endpoints.

10. Cloud Caution

Cloud storage and applications are all the rage, but be cautious. Any content that is moved to the cloud is no longer in your control, and cybercriminals are taking advantage of the weaker security of some cloud providers.

  • When using the Cloud, assume content sent is no longer private.
  • Encrypt content before sending (including system backups).
  • Check the security of your Cloud provider.
  • Don’t use the same password everywhere, especially Cloud passwords.

11. Don’t Let Everyone Administrate

Laptops can be accessed via user accounts or administrative accounts. Administrative access allows users much more freedom and power on their laptops, but that power moves to the cybercriminal if the administrator account is hacked.

  • Don't allow employees to use a Windows account with Administrator privileges for day-to-day activities.
  • Limiting employees to User Account access reduces the ability for malicious software (better known as malware) to do extensive damage at the "administrator" privileged level.
  • Make it a habit to change default passwords on all devices, including laptops, servers, routers, gateways and network printers.

12. BYOD Policy

Start with creating a Bring-Your-Own-Device policy. Many companies have avoided the topic, but it’s a trend that continues to push forward.  

It comes back to educating the user.

  • Consider allowing only guest access (internet only) for employee owned devices.
  • Enforce password locks on user owned devices.
  • Access sensitive information only through encrypted VPN.
  • Don’t allow storage of sensitive information on personal devices (such as customer contacts or credit card information).
  • Have a plan if an employee loses their device.

Best practices for Endpoint Security Solutions covering:

  • Antivirus
  • Device control
  • Host-IPS
  • Behavioral protections
  • Location awareness
  • Network access control
  • Application control

The first tip pertains to the selection of an endpoint security solution: regardless of which tool(s) you select, look for native support of Active Directory (AD) and the ability to support the types of devices that you have. This will make it much easier to control everything from one vantage point.

1. Identify Users/Workstations

AD security groups are by far the most versatile way to match users and workstations to your security policy. A simple and basic approach is to define the following groups in AD:

    • Workstations: laptops/desktops
    • Security groups: IT admins/users/guests

Of course, you can define additional groups as needed to provide more granularity in your security policies.

2. Document your Security Policies in a table

To help complete such a table, here are some best practices for each endpoint security project. Keep in mind that you will want to review them individually but then combine them into a single set of policies that work hand in hand to provide the best possible protection and control.

3. Best practices for Antivirus

    • Schedule a full scan once a week as a minimum, preferably at lunch time. For the laptops, a full scan should be triggered every time they make a connection to the corporate network.
    • Enforce full scans on removable devices when each is plugged in.
    • The AV signature updates should be performed every three hours.
    • Configure the workstation to directly download signature updates from the AV vendor's public online server(s) in case your internal AV server is offline due to hardware or software issues.

4. Best practices for Device Control

    • Wi-Fi must be disabled inside the corporate network. This should also be applied to all workstations, laptops and servers. Wi-Fi USB keys can be found everywhere for $20, and these need to be controlled.
    • Modems, Bluetooth and infrared must be disabled to prevent any communications that are not controlled by corporate policy.
    • U3 features in USB keys must be disabled, as they can present a fake CD-ROM drive, enabling malware to abuse this component to run automatically on the workstation. When browsing removable devices on the endpoint, the U3 CD-ROM can be mistaken for the real CD-ROM drive.
    • Audit all devices that are plugged in and capture all activity when files are written to removable devices. This will allow you to monitor the extraction of information, giving you a view into how your USB devices are used. With this information, additional policies can be set based on your findings.
    • Block access to any executables and scripts from removable devices and the CD or DVD drives. This will prevent any malware from running as a result of any unknown vulnerability being exploited before it gets patched.
    • Encrypt all of the data written to high-volume removable storage devices such as CD, DVD and USB backup volumes.

All these controls and restrictions must have the capability to be temporarily disabled. This should be available through a built-in challenge/response or Captcha. This ensures that the temporary exception is controlled by the IT staff and can be managed to exist for a specified/limited amount of time.

5. Best practices for Host-IPS and Behavioral Protections

    • Keylogger protection: Most malware programs include some form of a keylogger engine to recover passwords, credit card numbers and other personal data. Be sure to enable keylogger protection as part of your host IPS policy.
    • Network monitoring: Set the policy to monitor any application attempting to make network connections. Monitoring unauthorized connections can help detect a malware process attempting to call home.
    • Rootkit protection: Using a predefined whitelist of the drivers loaded by Windows, you can detect malware that appears on the surface to be valid but in fact has been signed with certificates stolen from a hardware or software vendor (cf. Stuxnet with the stolen Realtek certificate).
    • Prevent DLL injections: A favorite technique used by malware programs to prevent the antivirus product from removing them is to inject themselves as a DLL into a running process. Antivirus can't remove or quarantine a DLL that has already been loaded. Typically, malware will load itself into system processes like winlogon.exe or explorer.exe.

Using a learning mode or testing mode for intrusion prevention and behavioral protections is mandatory in order to test-drive the protection so that exceptions can be made for false positives. This also improves the level of trust when deploying the software, especially when it comes time to upgrade or install a new application, as this is the action that most often triggers a false positive.

Buffer overflow protections are now mandatory. A good example is the recent vulnerabilities targeting Microsoft Windows and Adobe Acrobat: the timeframe to receive a fix can be up to one month, while exploits appear in the wild on the Internet in a matter of hours.

6. Best practices for Application Control

Cyber thieves will take advantage of the areas of your operating system that change frequently to support legitimate applications. Therefore, you must secure the Windows registry to prevent the auto-loading of malware via:

    • AutoRun keys
    • Internet Explorer ActiveX controls and modules
    • DLL injection into system processes (winlogon, etc.)
    • Windows services
    • Drivers

Prevent applications from copying executables or scripts to network shares. This will prevent worms from spreading inside the corporate network.

Prevent "Print Screen" and "Copy/Paste" capabilities within sensitive applications such as financial application and health record applications.

Enforce a rule that only allows specific applications to save files on a remote server.

7. Best practices for Location Awareness

The level of security must be based not only on the user who is currently logged in, but also on the location from which the user is connecting and the context of the connection. This includes the type of connection, the level of security of the connection, and so on.

In the case of a laptop, the machine should possess three different policy levels depending upon its location: inside the corporate network, outside the corporate network, or connected to the Internet through a VPN. Other connection types may be blocked, such as attempting to connect to the Internet through an unsecured Wi-Fi connection that is not going through the corporate VPN.

To be able to determine the location, you need a solution that can detect which network interfaces are activated (this is mandatory for VPN control); can collect the IP information for the machine (IP address, DNS, etc.); and can use the local and network Active Directory information to determine the machine's type, role, groups, and so forth. It is dangerous to use simple server presence to test the machine's location, because if the server goes offline the location will no longer be valid and all your workstations will operate under a false policy, as if they were not connected to the corporate network.

Here is an example of location settings that will suit most companies:

    • Location inside: With only the LAN interface activated, check that the workstation is authenticated against LDAP
    • Location VPN: With the VPN interface activated and the right IP address from the VPN subnet.
    • Location Outside: neither inside nor VPN.

With these three locations identified, the following policies can be applied:

    • Policy inside: Whitelist network interfaces to allow only the LAN interface. This prevents any unexpected (potentially malicious or otherwise insecure) bridge across another network interface.
    • Policy VPN: Limit network incoming/outgoing connections to the minimum required. This helps with security, of course, but also helps to save on VPN bandwidth.
    • Policy outside: The network connection should be available for a limited time and only for purposes of establishing a VPN connection. The scenario of a user connected through a hotspot must first be tested. Then, the user should be allowed a window of opportunity (a good amount of time is three minutes) where they can open a Web connection (http/https) in order to pay for and authenticate to the hotspot portal (such as that of their hotel). Once authenticated to the hotspot, the VPN connection can be established.

8. Best practices for Network Access Control (NAC)

In order to put the basic NAC capabilities in place, 802.1x is the core layer that will prevent unauthorized workstations from connecting to the corporate network. The easiest way to accomplish this for a Windows-based environment is through Microsoft Active Directory and its built-in OS supplicant that is fully operational beginning with Windows XP SP2.

With 802.1x in place, the next step is to implement a network-based NAC implementation such as Cisco's NAC, Microsoft's NAP, or Juniper's UAC. This provides the necessary mechanisms to place a workstation within a VLAN based on its status (clean, quarantined, guest system, etc.).

Finally, an endpoint protection technology compatible with your NAC implementation rounds out the NAC capabilities as the endpoint agent will provide the in-depth health status of the workstation in addition to helping with the quarantining, cleansing/repair, and control of the workstation.

Here are the controls required to ensure a good level of NAC-based security policy:

    • Check that the workstation has all of the patches for the operating system and applications that could introduce vulnerabilities into the network environment (Microsoft Office, Adobe Acrobat and Flash, etc.)
    • Check that the antivirus status and signatures are up to date and that the system has performed routine scans with the latest signatures.
    • Check for the deployment/management (or lack thereof, or misconfiguration) of the software installed and running (Microsoft SMS, LANDesk, Altiris, etc.).

If a workstation fails on any one or all of the above checks, it should be placed in quarantine. While in quarantine, it should be limited to:

    • Receiving a notification explaining the status of the workstation to the user; administrators should be notified by e-mail.
    • If 802.1x is available, the workstation should be placed within a dedicated quarantine virtual LAN.
    • It should be limited to "Read-only" on USB and other removable devices (that could be used to gain access via a wireless network, for example).
    • Network connections must be restricted to only allow for remediation activities, updates (items such as patches and signatures updates), and notifications (such as the e-mail gateway).
    • E-mail and Web browser applications should deny access to any files being downloaded, opened or uploaded in order to prevent worms from spreading, while still allowing the employee to work with mail.
    • The endpoint protection product should provide automatic remediation, repairing and cleaning of the workstation without any administrator interaction, automatically moving the workstation from the quarantine VLAN to the production VLAN once complete.

In selecting an endpoint protection solution, the NAC health check should be fast -- less than a minute. Additionally, the endpoint protection's NAC capabilities should load immediately after the system is loaded. Finally, the endpoint protection product should provide the same NAC-level of protections for the endpoint even when the endpoint is not connected to a corporate network or VLAN.

Best Practices for Employees of Enterprises / Corporates / Small Businesses:

  • Always lock your computer before leaving your desk.
  • Use a strong password and change it periodically.
  • Ensure encryption is enabled on the official asset.
  • Ensure that the antivirus agent is updated.
  • Ensure that the proxy (URL filtering) solution is installed and enabled.
  • Avoid connecting the official asset to unsecured Wi-Fi / internet connections.
  • Do not connect any unauthorized or unapproved devices to official systems.
  • Do not connect any external storage media without performing an antivirus scan.
  • Keep your laptops/desktops updated with the latest security patches.
  • Use only approved and licensed applications; remove unwanted software / plugins.

New CA Certificate coming for CloudSOC Gateway


For Customers on the Global CloudSOC tenant with Gateway using gw.elastica.net and mgw.elastica.net, the CA certificate used for the Symantec CloudSOC Gateway will expire 12/12/2019.

Due to this upcoming certificate expiration date, CloudSOC is planning to change our CA certificate on November 7th, 2019. The certificate contains the same security properties as before and will be required for encrypted traffic interception by “gw.elastica.net” and “mgw.elastica.net”.

This is not an optional upgrade: deprecated certificates are a security risk.

More information available in the Knowledge Base article/Support Alert - https://support.symantec.com/us/en/article.ALERT2689.html

How to use Symantec Custom Inventory to report on Microsoft Office 365 Update Channel


Open SMP Console -> Settings -> All Settings -> Discovery and Inventory -> Inventory Solution -> Manage Custom Data Classes -> New data class -> Office 365 Update Channels

Create a Custom Data Class 

Add an Attribute with the following parameters:
Name: O365_Update_Channel
Data type: String 
Maximum size: 100 
Key: No 

Create a new Task 

Open SMP Console -> Manage -> Jobs and Tasks -> right-click and select New -> Task -> Run Script

Change "Script Type" to VBScript and paste the VBScript below.
(Also take a look at: https://support.symantec.com/us/en/article.howto126871.html)

'Pick the appropriate WMI registry hive code and comment the line you don’t use 
Const HKEY_LOCAL_MACHINE = &H80000002 
Set wshShell = WScript.CreateObject( "WScript.Shell" ) 
ComputerName = wshShell.ExpandEnvironmentStrings( "%COMPUTERNAME%" ) 
 
set nse = WScript.CreateObject ("Altiris.AeXNSEvent") 
nse.To = "{1592B913-72F3-4C36-91D2-D4EDA21D2F96}" ' Do not modify this GUID 
nse.Priority = 1 
dim objDCInstance 
set objDCInstance = nse.AddDataClass ("Office 365 Update Channels") 'Your Data Class Here 
set objDataClass = nse.AddDataBlock (objDCInstance) 
 
KeyPath = "SOFTWARE\Microsoft\Office\ClickToRun\Configuration"'Your Registry Key Path Here 
ValueName = "CDNBaseUrl"'Your Registy Entry Here 
Set oReg=GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv") 
 
'Use the HKEY constant defined earlier and use the oReg function appropriate to the type of data in the registry entry 
error_returned = oReg.GetStringValue(HKEY_LOCAL_MACHINE,KeyPath,ValueName,Value) 
if error_returned <> 0 then 
' Fall back to the 32-bit registry view (assumption: 32-bit Office on 64-bit Windows)
KeyPath = "SOFTWARE\WOW6432Node\Microsoft\Office\ClickToRun\Configuration" 
error_returned = oReg.GetStringValue(HKEY_LOCAL_MACHINE,KeyPath,ValueName,Value) 
end if 
 
set objDataRow = objDataClass.AddRow 
objDataRow.SetField 0, Value 
'If your data class has more than one attribute add a line for each 
'objDataRow.SetField 1, Value2 
nse.Send 
'Uncomment the line below for testing purposes 
'MsgBox nse.Xml
Create a "New Schedule" to run the Task on all Clients Computers!

Check if the Data exists on a Client Computer

In this example it is the O365 Semi-Annual Channel.

Here is a list of Office 365 Update Channels 

Create Filters for the different Channels 

Open SMP Console -> Manage -> Filters -> create a new folder or use an existing one -> right-click -> New -> Filter -> name the new filter, for example: Clients with O365 – Semi-Annual Channel (Targeted) -> switch the Filter Definition to Query Mode: Raw SQL

-> under Parameterized Query paste the following SQL Query:  
select vc.[guid] from Inv_Office_365_Update_Channels ic  
join vComputer vc on vc.[guid] = ic._ResourceGuid  
where ic.[O365_Update_Channel] = 'http://officecdn.microsoft.com/pr/b8f9b850-328d-4355-9145-c59439a0c4cf' 

Repeat the steps to create a Filter for all Channels 

  • Semi-Annual Channel (Targeted) (already created) 
  • Monthly Channel 
  • Monthly Channel (Targeted) 
  • Semi-Annual Channel 

Monthly Channel 

select vc.[guid] from Inv_Office_365_Update_Channels ic  
join vComputer vc on vc.[guid] = ic._ResourceGuid  
where ic.[O365_Update_Channel] = 'http://officecdn.microsoft.com/pr/492350f6-3a01-4f97-b9c0-c7c6ddf67d60' 

Monthly Channel (Targeted) 

select vc.[guid] from Inv_Office_365_Update_Channels ic  
join vComputer vc on vc.[guid] = ic._ResourceGuid  
where ic.[O365_Update_Channel] = 'http://officecdn.microsoft.com/pr/64256afe-f5d9-4f86-8936-8840a6a4f5be' 

Semi-Annual Channel 

select vc.[guid] from Inv_Office_365_Update_Channels ic  
join vComputer vc on vc.[guid] = ic._ResourceGuid  
where ic.[O365_Update_Channel] = 'http://officecdn.microsoft.com/pr/7ffbc6bf-bc32-4f92-8982-f9dd17fd3114' 

Filters can also be used to change the channel later if necessary!

Create a Report for O365 Update Channels 

Open SMP Console -> All Reports -> create a new folder or use an existing one -> right-click and select New -> Report -> SQL Report -> give the report a name, for example O365 Update Channel Version -> under Parameterized Query paste the following SQL query:

select vc.name, vc.[ip address], vc.[os name], ic.[O365_Update_Channel],
case
when ic.[O365_Update_Channel] = 'http://officecdn.microsoft.com/pr/492350f6-3a01-4f97-b9c0-c7c6ddf67d60' then 'O365 Monthly Channel' 
when ic.[O365_Update_Channel] = 'http://officecdn.microsoft.com/pr/64256afe-f5d9-4f86-8936-8840a6a4f5be' then 'O365 Monthly Channel (Targeted)' 
when ic.[O365_Update_Channel] = 'http://officecdn.microsoft.com/pr/7ffbc6bf-bc32-4f92-8982-f9dd17fd3114' then 'O365 Semi-Annual Channel' 
when ic.[O365_Update_Channel] = 'http://officecdn.microsoft.com/pr/b8f9b850-328d-4355-9145-c59439a0c4cf' then 'O365 Semi-Annual Channel (Targeted)' 
else 'no Channel found' 
End as 'O365 Channel' 
from Inv_Office_365_Update_Channels ic  
join vComputer vc on vc.[guid] = ic._ResourceGuid

Network23

Batch file to delete corrupt SEP definitions


Hi All,

Please find below a batch script to delete corrupt definitions (use at your own risk).

Tamper Protection needs to be turned off to delete the definitions; otherwise you will receive an Access Denied message.

@echo off
cd "C:\Program Files (x86)\Symantec\Symantec Endpoint Protection\14.*\Bin"
smc -stop
timeout 20
rem NOTE: If you are unable to stop the Symantec Management Client
rem you will need to temporarily disable Tamper Protection.
rem Please see the Technical Information at the bottom of this document for instructions

ECHO.
ECHO =======================
ECHO Delete definition files
ECHO =======================
ECHO.

del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\BashDefs\*.*"
Echo "Done"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\ccSubSDK_SCD_Defs\*.*"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\EfaVTDefs\*.*"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\HIDefs\*.*"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\IPSDefs\*.*"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\IronRevocationDefs\*.*"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\IronSettingsDefs\*.*"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\IronWhitelistDefs\*.*"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\SRTSPSettingsDefs\*.*"
del /F /Q "C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Definitions\VirusDefs\*.*"

ECHO.
ECHO ===========================
ECHO Remove values from Registry
ECHO ===========================
ECHO.

REG delete "HKLM\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\SharedDefs\SDSDefs" /v DEFWATCH_10 /f
REG delete "HKLM\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\SharedDefs\SDSDefs" /v NAVCORP_70 /f
REG delete "HKLM\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\SharedDefs\SDSDefs" /v SRTSP /f

rem Start Symantec services
ECHO.
ECHO ==========================
ECHO Starting Symantec services
ECHO ==========================
ECHO.

smc -start
timeout 20

rem Future addition: auto-execute latest patch
rem cd %homepath%\Desktop

ECHO Definitions removed. Upload new definition files.
pause

SQL Query for machines with Outdated Agents


I wanted to share a SQL query that can be used to list machines with outdated Altiris Agents.

select
    vc.Domain
    ,vc.name [Computer Name]
    ,vc.[OS Name]
    ,ah.[Last Configuration Request]
    ,ah.[Agent Version Health]
    ,agt.[Altiris Agent]
    ,agt.[Altiris Inventory Agent]
    ,agt.[Altiris Application Metering Agent]
    ,agt.[Altiris Software Update Agent]
    ,agt.[Software Management Solution Agent]
from vcomputer vc 
join vAC_AgentHealth ah on ah.ResourceGuid = vc.Guid and ah.[Agent Version Health] = 'NEEDSATTENTION'
join (
    select
    *
    from 
    (
        select guid
               ,[Agent Name], [Product Version]
        from vcomputer vc
        join Inv_AeX_AC_Client_Agent ca on ca._ResourceGuid = vc.Guid
        where [Agent Name] in 
        ('Altiris Agent',
        'Altiris Application Metering Agent',
        'Altiris Client Task Agent',
        'Altiris Client Task Scheduling Agent',
        'Altiris Inventory Agent',
        'Altiris Software Update Agent',
        'Deployment Solution Plug-in',
        'End User Notification Agent',
        'Inventory Rule Agent',
        'Software Delivery Results Pickup Agent',
        'Software Management Framework Agent',
        'Software Management Solution Agent')
    ) clients pivot(
        max([Product Version])
        for [Agent Name] in ( [Altiris Agent] , [Altiris Application Metering Agent], [Altiris Inventory Agent], [Altiris Software Update Agent], [Software Management Solution Agent] )
    ) piv
) agt on agt.Guid = vc.guid

SEPM Dashboard Malfunctioning


Hello All,

Many times the SEPM dashboard malfunctions, showing incorrect values for the "Out of date", "Up to date", and "Disabled" system counts on the Home screen.

Please refer to the solution below.

Issue: Incorrect info on SEPM Dashboard

Cause of the issue:

SEPM has a table, AGENT_DEF_CACHE, which contains the current definition numbers used by agents for reporting.

Sometimes this table has not been updated recently and does not contain the current definitions for all the active agents in SEPM.

Solution:

Truncating the AGENT_DEF_CACHE table fixes the issue.

Follow these steps:

1. Take a backup of the SEPM database.
2. Stop the SEPM service.
3. Run this SQL query:

TRUNCATE TABLE AGENT_DEF_CACHE;

4. Start the SEPM service.

How to configure Unmanaged Detector in SEPM 14.X


Unauthorized devices can connect to the network in many ways, such as through physical access in a conference room or rogue wireless access points. To enforce policies on every endpoint, you must be able to quickly detect the presence of new devices. Unknown devices are devices that are unmanaged and do not run the client AV software. You must determine whether these devices are secure. You can enable any client as an unmanaged detector to detect unknown devices.
 

Please refer to the document below to configure the Unmanaged Detector.


Using the new End User Notification Task in 8.5 RU2


8.5 RU2 – New Feature: End User Notification Task 

This new feature introduces a new plugin, installed with the “Core-SMA” on the client computer, called the End User Notification Agent. The End User Notification Agent was introduced with the release of 8.5 GA, whereas the new task type “End User Notification Task” was first introduced with 8.5 RU2.


End User Notification Agent Plugin is listed / visible in the SMP Console 


End User Notification Agent Plugin is not listed / visible on the SMP Agent itself 

In 8.5 RU2 a new task is available to send End User Notifications. This new task type can be found in the SMP Console -> Manage -> Jobs and Tasks -> right-click on a folder and select New -> Task -> End User Notification Task

You must specify a Title, a Body, and a List of actions (mandatory).
You can also specify a Window size, but this is optional.

Other options are to use HTML, URL and Binary, meaning you can send a URL that opens in the Notification Window using the installed Browser Engine. 

Binary allows you to send, for example, the content of a PDF file if the computer has a PDF reader installed.

In this blog post I've created an End User Notification Task to let the IT workers know when a computer installation is finished.


simple Plain text sample


HTML sample

The example above shows that tokens can also be used in plain text and HTML. If you want to format the notification text, you should use HTML, because far more formatting options are available.

In the list of actions, we can specify buttons that can be selected when the Notification Popup appears. You can specify up to 10 buttons separated by the “;” sign.

The list of actions gets translated where possible. In this example we specified “Cancel”. My “DEMOCOMPUTER” runs a German OS, so “Cancel” is translated to “Abbrechen”, the German word for Cancel.


simple Plain text with translation if possible


HTML with formatting options

Each “List of actions” button has a return code. The first action has a return value of 0, the second 1, the third 2, and so on… 

If the IT worker clicks OK, this results in return code 0; HR-Computer results in return code 1, Sales-Computer in return code 2, Marketing-Computer in return code 3, and so on. This allows you to use conditions within the job: based on which action the IT worker clicks, we can run different tasks (screenshot above).

If the IT worker clicks a button called HR-Computer, Sales-Computer or Marketing-Computer, this results in the installation of the appropriate software for the selected department.

I know there are other options to decide which software should be installed on those client computers, but this is a new option available since 8.5 RU2, and it should give you an idea of how you could leverage this feature.

During my testing with the End User Notification Task I found that when using HTML as the Body, token replacement does not work as expected. Sometimes the tokens are not replaced at all, or only some of the used tokens are replaced. There is also a difference between starting the single task and using the task within a job; this, too, shows different results for the tokens. I opened a support case for this issue (TECH256545) and hope it will be fixed soon.

Information Centric Analytics Best Practices - Post Configuration Tasks


After installing Symantec Information Centric Analytics, there are several configuration settings that should be set to allow the product to perform optimally. Follow the best practices below just after you install the platform, and before you begin configuring integrations, to ensure you get the most out of Information Centric Analytics.

Analysis Services Database Configuration

There are some additional SQL Server Analysis Services settings that may help improve the performance of ICA. Below are the recommended configuration modifications:

1. Log in to Microsoft SQL Server Management Studio, connect to the Analysis Services database, and ensure the following settings are configured within the General Settings properties (make sure that the Show Advanced option is selected in order to see all the options listed below):

ExternalCommandTimeout: 360000

ExternalConnectionTimeout: 360000

Memory\TotalMemoryLimit: 45 in a shared environment with Microsoft SQL Server and Microsoft SSAS on the same server; 75 with Microsoft SSAS on a standalone server

NOTE: In the shared case, this should be set in conjunction with setting the SQL Server relational engine memory configuration to 50% of available server memory.

ServerTimeout: 360000

ThreadPool\Process\MaxThreads: 150

ThreadPool\Process\MinThreads: 1

ThreadPool\Query\MaxThreads: 48

ThreadPool\Query\MinThreads: 1

SQL Server Settings

There are some additional SQL Server settings that may help improve the performance of ICA. Below are the recommended configuration modifications:

Remote Server Connections

1. Open SQL Management Studio

2. Connect to the ICA database server using SQL Management Studio

3. Right-click on the SQL server on SQL Management Studio and select Properties

4. Select the Connections page

5. Check Allow remote connections to this server and set the Remote query timeout value to 0 (no timeout)

6. Click OK to save changes
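If you prefer to script this change, the same settings can be applied with sp_configure; a minimal T-SQL sketch, offered as an assumed equivalent of the UI steps above:

-- Allow remote connections to this server
EXEC sp_configure 'remote access', 1;
-- 0 = no timeout for remote queries
EXEC sp_configure 'remote query timeout', 0;
RECONFIGURE;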

Server Memory Options

The minimum and maximum server memory settings configure the amount of memory, in megabytes, that establishes the upper and lower limits of memory used by the buffer pool on the Microsoft SQL Server. The SQL Server engine starts with only the memory required to initialize. As the workload increases, it keeps acquiring the memory required to support the workload, and never acquires more than the level specified in max server memory. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647 MB. A general rule of thumb is to leave the operating system 20% of the memory.

1. Open SQL Management Studio

2. Connect to the ICA database server using SQL Management Studio

3. Right-click on the SQL server on SQL Management Studio and select Properties

4. Select the Memory page

5. Enter the appropriate memory size under Maximum server memory (in MB)

6. Click OK to save changes
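The same setting can also be applied with T-SQL; a minimal sketch, assuming a server with 32 GB of RAM where roughly 20% is left to the operating system (the 26000 MB value is illustrative):

-- max server memory is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Example: ~26000 MB for SQL Server on a 32 GB server (leave ~20% to the OS)
EXEC sp_configure 'max server memory (MB)', 26000;
RECONFIGURE;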

Increasing SQL Server Agent Job History Retention

By default, the history retention for SQL Server Agent jobs covers only a few days, or the last few runs of ALL SQL Server jobs on the server. Depending on the server setup, there could be multiple jobs set up, and each job will be “fighting” for a part of the job history log. The SQL Agent job history retention settings apply to the SQL Server instance and are not specific to any one job. By default, SQL Server Agent job history is set up to purge all SQL Agent history records once the history log reaches a certain number of rows. Use the following steps to disable the size limit specified in the SQL Server Agent properties.

1. Open an instance of SQL Server Management Studio (SSMS)

2. Connect to the ICA database server

3. In Object Explorer, expand the database server

4. Right Click on the SQL Server Agent and click on Properties

5. In the SQL Server Agent Properties window, select History

You have the following options:

1. (Not recommended) Limit size of job history log:

  • Maximum job history rows per job: specifies how many rows are retained for each job
  • Maximum job history log size (in rows): specifies how many rows are retained in the history log

2. (Recommended) Remove Agent History, Older than: specifies a cap on how long the SQL Server Agent Job History is retained.
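The recommended option can also be scripted; msdb provides a stored procedure for purging history older than a given date. A minimal sketch (the 30-day window is an example, not a product requirement):

-- Purge SQL Server Agent job history older than 30 days (example window)
DECLARE @cutoff DATETIME = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;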

For more best practice articles on Symantec Information Centric Analytics see the following posts:

SEP v14 client & macOS 10.15 Catalina


With the new macOS 10.15 (Catalina) released recently, the current Mac SEP client may or may not work on it, depending on your SEP 14 version.

You will be pleased to know that macOS 10.15 Catalina will be officially supported in SEP 14.2 RU2, which is due to be released on or around mid-November 2019. Put this on your calendar!

A full Mac compatibility chart, including macOS 10.15 Catalina, can be found at https://support.symantec.com/us/en/article.TECH131045.html.

Time to get our test Macs upgraded to Catalina and ready to be tested as soon as SEP 14.2 RU2 appears! :)

What's your preferred testing method for this?

IT Management Suite 8.5 RU3 is now available

SMP - ASDK - 8.5 RU3


With the release of ITMS 8.5 RU3 there were some new Web Service methods added to the following:

Feature: Enhancements of the Symantec Administrator Software Development Kit (ASDK) application programming interface (API).

The Symantec ASDK provides APIs that you can use to automate and customize the Symantec Management Platform. You can call APIs through Web services, COM, and the Windows command line (CLI). In 8.5 RU3, the following new API methods for interfacing with Task Management are introduced:

■ CreateClientJob
■ CreateServerJob
■ AddTaskFirstToJob
■ AddTaskLastToJob
■ CreateJobCondition
■ AddJobConditionRules
■ AddTaskToJobConditionThenGroup
■ AddTaskToJobConditionElseGroup
■ RemoveJobCondition
■ RemoveNodeFromJob
■ RemoveAllNodesFromJob
■ ConfirmJobChanges

For more information, see the Symantec ASDK Help at

C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Symantec\ASDK

Feature: Enhancements of the Patch Management Workflow Web Service application programming interface (API).

The Patch Management Workflow Web Service is installed with Patch Management Solution. The service contains an API that accesses the functionality of Notification Server (NS) and lets you perform various patch management actions.

You can access the service at

http://localhost/Altiris/patchmanagementcore/patchworkflowsvc.asmx

In 8.5 RU3, the following enhancements are introduced:

■ HTML Help page. The page includes a list of available methods, detailed method descriptions, and usage examples for some methods. You can access the page at

http://localhost/Altiris/patchmanagementcore/patchworkflowsvc.html

■ New API methods:

- CreateWindowsUpdateAssessmentTask
- CreateWindowsUpdateInstallationTask
- EditWindowsUpdateAssessmentTask
- EditWindowsUpdateInstallationTask

For more information, see the knowledge base article DOC11543.

Task Management Service

  • CreateClientJob
  • CreateServerJob
  • AddTaskFirstToJob
  • AddTaskLastToJob
  • CreateJobCondition
  • AddJobConditionRules
  • AddTaskToJobConditionThenGroup
  • AddTaskToJobConditionElseGroup
  • RemoveJobCondition
  • RemoveNodeFromJob
  • RemoveAllNodesFromJob
  • ConfirmJobChanges

Patch Management Service

  • CreateWindowsUpdateAssessmentTask
  • CreateWindowsUpdateInstallationTask
  • EditWindowsUpdateAssessmentTask
  • EditWindowsUpdateInstallationTask

You may want to update the Zero Day Patch workflow to leverage these new methods:

Workflow Template - Zero Day Patch
https://www.symantec.com/connect/videos/workflow-template-zero-day-patch

---

To keep track of updates, you can compare Web Service methods using a tool I've written.

---

Documentation

Symantec™ IT Management Suite 8.5 RU3 Release Notes
https://support.symantec.com/us/en/article.doc11603.html
