
SNAC LAN Enforcement: Prerequisites for Configuring IEEE 802.1X Port-Based Authentication in NON-TRANSPARENT MODE

Cisco-mandated tasks

The following Cisco-mandated tasks must be completed before implementing the IEEE 802.1X Port-Based Authentication feature:

  • IEEE 802.1X must be enabled on the device port.
  • The device must have a RADIUS configuration and be connected to the Cisco secure access control server (ACS). You should understand the concepts of the RADIUS protocol and have an understanding of how to create and apply access control lists (ACLs).
  • EAP support must be enabled on the RADIUS server.
  • You must configure the IEEE 802.1X supplicant to send an EAP-logoff (Stop) message to the switch when the user logs off. If you do not configure the IEEE 802.1X supplicant, an EAP-logoff message is not sent to the switch, and the accompanying accounting Stop message is not sent to the authentication server. See the Microsoft Knowledge Base article at http://support.microsoft.com and set the SupplicantMode registry value to 3 and the AuthMode registry value to 1 (a hedged registry sketch follows this list).
  • Authentication, authorization, and accounting (AAA) must be configured on the port for all network-related service requests. The authentication method list must be enabled and specified. A method list describes the sequence and authentication method to be queried to authenticate a user. See the IEEE 802.1X Authenticator feature module for information.
  • The port must be successfully authenticated.
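
As an illustration only of the supplicant registry change mentioned above, the values can be set from an elevated command prompt. The key path below is an assumption based on the legacy Microsoft EAPOL supplicant; verify it against the Microsoft Knowledge Base article for your Windows version before using it:

  rem Assumed key path - confirm against the Microsoft KB article referenced above
  reg add "HKLM\SOFTWARE\Microsoft\EAPOL\Parameters\General\Global" /v SupplicantMode /t REG_DWORD /d 3 /f
  reg add "HKLM\SOFTWARE\Microsoft\EAPOL\Parameters\General\Global" /v AuthMode /t REG_DWORD /d 1 /f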

The IEEE 802.1X Port-Based Authentication feature is available only on Cisco 89x and 88x series integrated switching routers (ISRs) that support switch ports.

Note: Optimal performance is obtained with a connection that has a maximum of eight hosts per port.

The following Cisco ISR-G2 routers are supported:

  • 1900
  • 2900
  • 3900
  • 3900e

The following cards or modules support switch ports:

  • Enhanced High-speed WAN interface cards (EHWICs) with ACL support:
    • EHWIC-4ESG-P
    • EHWIC-9ESG-P
    • EHWIC-4ESG
    • EHWIC-9ESG
  • High-speed WAN interface cards (HWICs) without ACL support:
    • HWIC-4ESW-P
    • HWIC-9ESW-P
    • HWIC-4ESW
    • HWIC-9ESW

Note: For module compatibility with a specific router platform, see the Cisco EtherSwitch Modules Comparison:
http://www.cisco.com/en/US/products/ps5854/products_qanda_item0900aecd802a9470.shtml

To determine whether your router has switch ports that can be configured with the IEEE 802.1X Port-Based Authentication feature, use the show interfaces switchport command.
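
As a rough sketch of how the prerequisites above come together on a switch port (the interface name, RADIUS server address and shared secret are placeholders, and exact command syntax varies by IOS release):

  aaa new-model
  aaa authentication dot1x default group radius
  dot1x system-auth-control
  radius-server host 192.168.1.10 auth-port 1812 acct-port 1813 key <shared-secret>
  !
  interface GigabitEthernet0/1/0
   switchport mode access
   dot1x pae authenticator
   dot1x port-control auto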

Restrictions for IEEE 802.1X Port-Based Authentication

IEEE 802.1X Port-Based Authentication Configuration Restrictions

  • The IEEE 802.1X Port-Based Authentication feature is available only on a switch port.
  • If the VLAN to which an IEEE 802.1X port is assigned is shut down, disabled, or removed, the port becomes unauthorized. For example, the port is unauthorized after the access VLAN to which a port is assigned shuts down or is removed.
  • When IEEE 802.1X authentication is enabled, ports are authenticated before any other Layer 2 or Layer 3 features are enabled.
  • Changes to a VLAN to which an IEEE 802.1X-enabled port is assigned are transparent and do not affect the switch port. For example, a change occurs if a port is assigned to a RADIUS server-assigned VLAN and is then assigned to a different VLAN after reauthentication.
  • When IEEE 802.1X authentication is enabled on a port, you cannot configure a port VLAN that is equal to a voice VLAN.
  • This feature does not support standard ACLs on the switch port.
  • The IEEE 802.1X protocol is supported only on Layer 2 static-access ports, Layer 2 static-trunk ports, voice VLAN-enabled ports, and Layer 3 routed ports.
  • The IEEE 802.1X protocol is not supported on the following port types:
    • Dynamic-access ports—If you try to enable IEEE 802.1X authentication on a dynamic-access (VLAN Query Protocol [VQP]) port, an error message appears, and IEEE 802.1X authentication is not enabled. If you try to change an IEEE 802.1X-enabled port to dynamic VLAN assignment, an error message appears, and the VLAN configuration is not changed.
    • Dynamic ports—If you try to enable IEEE 802.1X authentication on a dynamic port, an error message appears, and IEEE 802.1X authentication is not enabled. If you try to change the mode of an IEEE 802.1X-enabled port to dynamic, an error message appears, and the port mode is not changed.
    • Switched Port Analyzer (SPAN) and Remote SPAN (RSPAN) destination ports—You can enable IEEE 802.1X authentication on a port that is a SPAN or RSPAN destination port. However, IEEE 802.1X authentication is disabled until the port is removed as a SPAN or RSPAN destination port. You can enable IEEE 802.1X authentication on a SPAN or RSPAN source port.
  • Configuring the same VLAN ID for both access and voice traffic (using the switchport access vlan vlan-id and the switchport voice vlan vlan-id commands) fails if authentication has already been configured on the port.
  • Configuring authentication on a port on which you have already configured switchport access vlan vlan-id and switchport voice vlan vlan-id fails if the access VLAN and voice VLAN have been configured with the same VLAN ID.
  • By default, authentication system messages, MAC authentication bypass (MAB) system messages and 802.1x system messages are not displayed. If you need to see these system messages, turn on the logging manually, using the following commands:
    • authentication logging verbose
    • dot1x logging verbose
    • mab logging verbose

For more specific details on the Cisco prerequisites for integrating SNAC with Cisco devices, please refer to the link below:

http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_usr_8021x/configuration/xe-3se/3850/sec-user-8021x-xe-3se-3850-book/config-ieee-802x-pba.html#GUID-B1C1F75B-45CF-4CA3-A833-43D7C6986249


SNAC Gateway/LAN Enforcement: Failed to receive an authentication reply from the RADIUS server (Reversible Password Encryption Disabled)

Before proceeding further with the discussion of this issue, let's all agree that it is not limited to the Symantec NAC solution, so at no point should the requirement to enable reversible password encryption in Active Directory be perceived as a Symantec-specific requirement. Be it Nevis, Napera, Aruba, Bradford, Cisco, Juniper or Forescout, a RADIUS implementation that relies on MS-CHAP v2 needs the passwords in a form it can compare as MS-CHAP hashes. If the passwords are not stored with reversible encryption, Windows cannot derive them in the MS-CHAP native format, and so cannot verify that the right password was supplied.

The Store password using reversible encryption policy setting provides support for applications that use protocols that require the user's password for authentication. Storing encrypted passwords in a way that is reversible means that the encrypted passwords can be decrypted. A knowledgeable attacker who is able to break this encryption can then log on to network resources by using the compromised account. For this reason, never enable Store password using reversible encryption for all users in the domain unless application requirements outweigh the need to protect password information.

If you use the Challenge Handshake Authentication Protocol (CHAP) through remote access or Internet Authentication Services (IAS), you must enable this policy setting. CHAP is an authentication protocol that is used by remote access and network connections. Digest Authentication in Internet Information Services (IIS) also requires that you enable this policy setting.

Fulfilling this requirement stops the Enforcer's user.log from logging failed attempts to receive an authentication reply from the RADIUS server, and in turn stops the RADIUS packets from timing out when the Enforcer forwards the authentication request from the authenticator.

You can enable additional secure channel (Schannel) events by changing the following registry value from 1 (REG_DWORD type, data 0x00000001) to 3 (REG_DWORD type, data 0x00000003) to confirm that the issue is completely resolved after making the required changes:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\EventLogging
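
For example, a minimal sketch of that change from an elevated command prompt (EventLogging is a DWORD value under the SCHANNEL key):

  reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" /v EventLogging /t REG_DWORD /d 3 /f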

This issue is mainly seen with:

  1. A network switch with 802.1x enabled; its role is Authenticator.
  2. The Symantec Network Access Control (SNAC) Enforcer, which checks the endpoint security and compliance posture.
  3. The Remote Authentication Dial-In User Service (RADIUS) / Network Access Protection (NAP) server, which checks the customer's directory server for the user or computer authentication.

DLP Hot Backups failing after upgrade to 14.x (Oracle 11.2.0.4) on tools like NetBackup, Backup Exec, Commvault, etc.

The below error is seen:

Failure Reason: ERROR CODE [82:127]:
Network send failed: Software caused connection abort Source: DLPSERVERNAME,
Process: clBackup

In certain cases, the below error is also seen:

ERROR CODE [19:1335]:
Oracle Backup [CVImpersonateLoggedOnUser() failed for oraUser=[protect]
ntDomain=[DLPSERVER] m_hToken=2bc.]
Source: DLPSERVER, Process: ClOraAgent

On further investigation, it is found that the Oracle Home (in the backup tool) is still pointing to the old Oracle Home location. (This happens because, as recommended for DLP, a new Oracle instance/version is installed and the variables are re-pointed.) Hence, the Oracle Home needs to be changed in all tools (backup, monitoring, Tripwire, etc.).

Ora-Capture_2.PNG

Edit the Oracle Path to the New Path = Drive:\Oracle\Product\11.2.0.4\db_1

This should help in resolving both the above listed errors.

Additional Notes on this Topic:

What is Oracle Home w.r.t DLP?

The Oracle base location is the location where Oracle Database binaries are stored. During installation, you are prompted for the Oracle base path. Typically, an Oracle base path for the database is created during Oracle Grid Infrastructure installation.

To prepare for installation, Oracle recommends that you only set the ORACLE_BASE environment variable to define paths for Oracle binaries and configuration files. Oracle Universal Installer (OUI) creates other necessary paths and environment variables in accordance with the Optimal Flexible Architecture (OFA) rules for well-structured Oracle software environments.

For example, with Oracle Database 11g, Oracle recommends that you do not set an Oracle home environment variable and instead allow OUI to create it. If the Oracle base path is /u01/app/oracle, then by default OUI creates an Oracle home path of the form /u01/app/oracle/product/11.2.0/dbhome_1.

What are Offline (Cold) Backups w.r.t. DLP

An offline cold backup is a physical backup of the database after it has been shut down using the SHUTDOWN NORMAL command. If the database is shut down with the IMMEDIATE or ABORT option, it should be restarted in RESTRICT mode and then shut down with the NORMAL option. An operating system utility is used to perform the backup. For example, on Unix you could use cpio, tar, dd, fbackup or some third-party utility. For a complete cold backup, the following files must be backed up:

  • All datafiles
  • All control files
  • All online redo log files (optional)
  • The init.ora file (can be recreated manually)

The location of all database files can be found in the data dictionary views, DBA_DATA_FILES, V$DATAFILE, V$LOGFILE and V$CONTROLFILE. These views can be queried even when the database is mounted and not open.
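
For example, a quick sketch of those dictionary queries (run from SQL*Plus as a suitably privileged user):

  SELECT file_name FROM dba_data_files;   -- datafile locations
  SELECT member    FROM v$logfile;        -- online redo log members
  SELECT name      FROM v$controlfile;    -- control files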

A cold backup of the database is an image copy of the database at a point in time. The database is consistent and restorable. This image copy can be used to move the database to another computer provided the same operating system is being used. If the database is in ARCHIVELOG mode, the cold backup would be the starting point for a point-in-time recovery. All archive logfiles necessary would be applied to the database once it is restored from the cold backup. Cold backups are useful if your business requirements allow for a shut-down window to backup the database. If your database is very large or you have 24x7 processing, cold backups are not an option, and you must use online (hot) backups.

What are Online (Hot) Backups w.r.t DLP

When databases must remain operational 24 hours a day, 7 days a week, or have become so large that a cold backup would take too long, Oracle provides for online (hot) backups to be made while the database is open and being used. To perform a hot backup, the database must be in ARCHIVELOG mode. Unlike a cold backup, in which the whole database is usually backed up at the same time, tablespaces in a hot backup scenario can be backed up on different schedules. The other major difference between hot and cold backups is that before a tablespace can be backed up, the database must be informed when a backup is starting and when it is complete. This is done by executing two commands:

Alter tablespace tablespace_name begin backup;
Perform Operating System Backup of tablespace_name datafiles
Alter tablespace tablespace_name end backup;

At the conclusion of a hot backup, the redo logs should be forced to switch, and all archived redo log files and the control file should also be backed up, in addition to the datafiles. The control file cannot be backed up with a backup utility; it must be backed up with the following Oracle command in server manager:

Alter database backup controlfile to 'file_name';

Signs and Symptoms that your DLP Enforce is overloaded

There are several questions people ask when, as a consultant/architect, you visit them to provide services. Some environments are adequately provisioned in terms of hardware and some are not; sizing concerns occur in both, and the local administration teams need a way out in both cases. Of course, if you do not have enough hardware resources, the answer is simple: buy more and better hardware, which would be the key to a happy life, all in all.

However, this article is specifically for environments which already have the required hardware in place. In other words, the RAM, the CPU cores, the hard disk space and everything else are pretty much in line, and even then there are performance concerns. Of course there is another known category here, which is 'misconfiguration'. Yes, I'm certainly talking about environments that inspect GET traffic instead of just POST, places that have policies implemented to look for all traffic going to certain destinations irrespective of content, and, not to forget, the inappropriate/excessive usage of wildcards. However, this article is not intended to cover those types either.

Here is what I wish to cover in this article: enough hardware is physically available, but the DLP Enforce application/services are not configured to utilize it effectively.

Let's start with the signs and symptoms of this category as a whole:

(a) Enforce takes a long time to load

(b) Report generation timing out or taking long

(c) Certain operations/edits timing out or taking long

(d) RSOD (Red screen of death) while performing certain operations/configuration changes

(e) The below log entries appear in VontuManager.log (with debug logging enabled)

•INFO   | jvm 1    | 2017/01/010 07:07:27 | Exception in thread "HeartbeatCheckerTimer" com.vontu.model.DatabaseConnectionException: org.apache.ojb.broker.PersistenceBrokerException: Used ConnectionManager instance could not obtain a connection

•INFO   | jvm 1    | 2017/01/05 06:13:42 | line 1:71: unexpected token: null

•INFO   | jvm 1    | 2017/01/05 06:57:17 | line 1:71: unexpected token: null

•INFO   | jvm 1    | 2017/01/05 06:57:23 | line 1:71: unexpected token: null

•INFO   | jvm 1    | 2017/01/05 07:09:27 | Caused by: org.apache.ojb.broker.PersistenceBrokerException: Used ConnectionManager instance could not obtain a connection

•INFO   | jvm 1    | 2017/01/05 07:09:27 | Caused by: org.apache.ojb.broker.accesslayer.LookupException: Could not get connection from DBCP DataSource

•INFO   | jvm 1    | 2017/02/04 22:40:29 | [org.apache.ojb.broker.accesslayer.ConnectionFactoryDBCPImpl] WARN: Connection close failed

•INFO   | jvm 1    | 2017/02/04 22:40:29 | Already closed.

•INFO   | jvm 1    | 2017/02/04 22:40:29 | java.sql.SQLException: Already closed.

Now let's look at the solution, which is to allow more memory (heap size) for the JVM in order to resolve all the above complaints/symptoms.

Under Vontu\Protect\config there is a configuration file for each service which sets the amount of RAM the heap can use. Increase the values below to leverage some of the existing/unused memory on the server and improve/tune performance for Enforce as a whole:

# Initial Java Heap Size (in MB)
wrapper.java.initmemory = 4096

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory = 8192

Symantec Endpoint Protection v14.01 (MP1) has been released!

A new year. A new SEP v14 release! :D

Looks like Symantec has been busy with squashing the bugs from the first release of SEP v14 and the list of bugs resolved is impressive. (link below)

I often hold off on the first major release (using it only for testing) until the next minor release, and this is now my opportunity to upgrade our production network with this release. Anyone with similar testing experience?

Here are the documents for some light readings:

Symantec™ Endpoint Protection v14 MP1 Release Notes - https://support.symantec.com/en_US/article.DOC9698.html

Supported upgrade paths to Symantec™ Endpoint Protection v14 MP1

https://support.symantec.com/en_US/article.HOWTO81070.html

Symantec™ Endpoint Protection v14 Installation and Administration Guide - https://support.symantec.com/en_US/article.DOC9449.html

Upgrade best practices for Endpoint Protection v14 - https://support.symantec.com/en_US/article.HOWTO125386.html

Symantec™ Endpoint Protection Quick Start Guide - https://support.symantec.com/en_US/article.DOC8227.html

What's new in v14 - https://support.symantec.com/en_US/article.HOWTO124730.html

New fixes and component versions in Symantec™ Endpoint Protection v14 MP1 - https://support.symantec.com/en_US/article.INFO4193.html

Database schema reference for Endpoint Protection 14 - http://www.symantec.com/docs/DOC9438

So where can you grab the latest version from? You can download it from the usual place, which is https://symantec.flexnetoperations.com, using your serial number (beginning with Mxxxxxxxx). Please note: you cannot use your existing v12.1 serial number to access this; you will need to use the new serial number which was sent out to all existing v12.1 users (in the Upgrade Notification e-mail). If you have not received this e-mail, please contact Symantec Licensing Support to get the new serial number.

As mentioned earlier, I'm starting to plan the migration to this version very soon. What about you? Are you going to upgrade straight away, or do you need to plan first? Our setup has SVA and it's now no longer supported, so that's something to be aware of.

Share your upgrade experience!

Preventing PowerShell from running via Office

Microsoft's PowerShell has lately been a tool of choice for malware distributors; the trend has only increased since December 2016's white paper PowerShell threats surge: 95.4 percent of analyzed scripts were malicious.  Too often, end users tricked into opening a malicious attachment will find this powerful tool turned against them.  The ultimate payload downloaded by PowerShell is usually ransomware.  Once downloaded and run:

***** YOUR FILES HAVE BEING ENCRYPTED *****

Now your organization’s data is lost, unless you have a healthy backup.

 

Application And Device Control: An Excellent Extra Line of Defense

Using Symantec Endpoint Protection’s optional Application And Device Control component, it is possible to prevent malicious Word, Excel or other Office document attachments from accessing PowerShell or cmd.  Here’s a guide illustrating how to craft such a policy yourself….

2017-02-17 10_27_58-10.148.196.246 - Remote Desktop Connection.png

2017-02-17 15_25_33-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_25_50-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_26_37-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_28_30-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_28_53-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

Or, find the attached policy that can be implemented and tested in your environment.  Please note that this "Blocking PowerShell.dat" file is provided "as is."  We strongly recommend that it be trialed first in a controlled test environment before applying the policy throughout the organization!  Also note that this is one extra layer of defense; it further reduces the risk of a malware infection, but cannot guarantee eliminating all possibility of damage.

 

More MUST READ Articles and Documents

Hardening Your Environment Against Ransomware

https://www.symantec.com/connect/articles/hardening-your-environment-against-ransomware

Support Perspective: W97M.Downloader Battle Plan

https://www.symantec.com/connect/articles/support-perspective-w97mdownloader-battle-plan

 

REPORT: Organizations must respond to increasing threat of ransomware

https://www.symantec.com/connect/blogs/report-organizations-must-respond-increasing-threat-ransomware

 

Ransomware removal and protection with Symantec Endpoint Protection

https://support.symantec.com/en_US/article.HOWTO124710.html

Best Practices for Deploying Symantec Endpoint Protection's Application and Device Control Policies
http://www.symantec.com/docs/TECH145973

Many thanks to mick2009 for reviewing this article!

Preventing PowerShell from running via Office

Microsoft's PowerShell has lately been one of the tools of choice for malware distribution; this trend has only grown, according to the December 2016 white paper PowerShell threats surge: 95.4 percent of analyzed scripts were malicious.  Too often, end users are tricked into opening a malicious attachment and find this powerful tool turned against them. The payloads downloaded by PowerShell are mostly ransomware. Once downloaded and run:

***** YOUR FILES HAVE BEING ENCRYPTED *****

Now your organization's data is lost, unless you have a good backup.

 

Application And Device Control: An Excellent Extra Layer of Defense

Using Symantec Endpoint Protection's Application and Device Control feature, it is possible to prevent malicious Word, Excel or other Office document attachments from accessing PowerShell or cmd. Here is an illustrated guide on how to build the policy:

2017-02-17 10_27_58-10.148.196.246 - Remote Desktop Connection.png

2017-02-17 15_25_33-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_25_50-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_26_37-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_28_30-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_28_53-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

Or, see the attached policy, which can be tested and implemented in your environment. We strongly recommend that it be tested in a controlled environment before being applied in production. Note that this is an extra layer of protection: it reduces the risk of malware infection, but cannot guard against every possibility of damage.

 

More MUST READ Articles and Documents

Hardening Your Environment Against Ransomware

https://www.symantec.com/connect/articles/hardening-your-environment-against-ransomware

Support Perspective: W97M.Downloader Battle Plan

https://www.symantec.com/connect/articles/support-perspective-w97mdownloader-battle-plan

 

REPORT: Organizations must respond to increasing threat of ransomware

https://www.symantec.com/connect/blogs/report-organizations-must-respond-increasing-threat-ransomware

 

Ransomware removal and protection with Symantec Endpoint Protection

https://support.symantec.com/en_US/article.HOWTO124710.html

Best Practices for Deploying Symantec Endpoint Protection's Application and Device Control Policies
http://www.symantec.com/docs/TECH145973

Many thanks to mick2009 for reviewing this article!

Security Advisories on SEP 12.1 RU6 MP6 and also SEP v14.0 (6th March 2017)

Just received an alert on security advisories for the following products:

* SEP v12.1 RU6 MP6 and earlier
* SEP v14.0

The security advisories are:

CVE-2016-9093 - Local Privilege Escalation Vulnerability
http://www.securityfocus.com/bid/96294

CVE-2016-9094 - Local Command Injection Vulnerability
http://www.securityfocus.com/bid/96298

If you're on SEP v12, you're strongly recommended to upgrade to SEP v12.1 RU6 MP7. And if you're on v14, you're strongly recommended to upgrade to SEP v14.0 MP1.

Symantec has done a write-up on this, which you can find at https://www.symantec.com/security_response/securit...

Get patching, guys/gals! :)


SNAC LAN Enforcement: Switch performance/throughput dropped after enabling 802.1x

During most of our SNAC/NAC 802.1x implementations, we used to sign off the deployment and leave the city the same day. The next day (and this is almost becoming a trend) we would get calls/complaints about switch performance/throughput having dropped considerably after the SNAC/NAC deployment. The gut feeling was always to contact Cisco for a hardware upgrade, and to ask Symantec to provide input for sizing/hardware enhancement.

Hence this article, to highlight the fact that this issue has mostly turned out to be an STP (802.1D) configuration problem rather than a sizing gap.

Please read on for further details if you are, or have been, sailing in the same boat:

Problem Statement:

The IEEE 802.1D Spanning Tree Protocol (STP) has been part of the industry since 1985. STP, as we know, is a Layer 2 protocol that runs between bridges to help create a loop-free network topology. Bridge Protocol Data Units (BPDUs) are packets sent between Ethernet switches (essentially multi-port bridges) to elect a root bridge, calculate the best path to the root, and block any ports that create loops. The resulting tree, with the root at the top, spans all bridges in the LAN; hence the name: spanning tree.

STP is an effective means of preventing loops, at least with the default and simplest configuration settings. Thus, it is easy to not tune parameters and just accept the defaults. This leads to STP networks without a proper design, and especially when SNAC is implemented and 802.1x is enabled, everyone is surprised to discover network issues related to spanning tree.

There are several aspects which could go wrong in terms of STP; however, I would like to focus on the most common one (the default configuration on Cisco switches): no manual root bridge configured.

Not configuring a root bridge manually is itself a sign of missing STP architecture design. It leaves all switches in the environment using the default root bridge priority of 32768, and if all switches have the same root bridge priority, the switch with the lowest MAC address will be elected as the root bridge.

Many networks have not been configured so that a single core switch has a lower root bridge priority, which would force that switch to be elected as the STP root for any or all VLANs.

Point to ponder: isn't the lowest MAC address generally found on older/low-end hardware?

In any case, it is possible that a small access-layer switch with a low MAC address could become the STP root. This situation adds performance overhead and makes for longer convergence times because of root bridge re-election.

Resolution:

When enabling SNAC & 802.1x configure the core switches with lower STP priorities so that one will be the root bridge and any other core bridges will have a slightly higher value and take over should the primary core bridge fail. Having "tiered" STP priorities configured on the switches determines which switch should be root bridge in the event of a bridge failure. This makes the STP network behave in a more deterministic manner.

 
On the core Cisco switch you would configure the primary root switch with this command:

Switch1(config)# spanning-tree vlan 1-4094 root primary

On the core Cisco switch you would configure the secondary root switch with this command:

Switch2(config)# spanning-tree vlan 1-4094 root secondary

The net effect from these two commands will set the primary switch root bridge priority to 8192, and the secondary switch root bridge priority to 16384.
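
To confirm which switch actually became the root bridge after the change, a quick check such as the following can be used (output and exact syntax vary by platform):

Switch1# show spanning-tree root
Switch1# show spanning-tree vlan 10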

If you are facing a congestion issue after a NAC deployment even after configuring a manual root, feel free to reach out to me and I'll try to partner with you in finding a solution.

SNAC LAN Enforcement: Switch performance/throughput dropped (RSTP not enabled)

It is common, especially, for some of the newer features to not be configured on the switch. Use of IEEE 802.1D rather than Rapid STP is one such common example, and it greatly affects SNAC implementations.

This article is the second step in identifying/fixing performance/throughput issues on a switch after a SNAC deployment. The first step, of course, is to configure legacy STP correctly (which remains the fall-back/legacy support). For more details pertaining to STP, refer to the article below:

https://www.symantec.com/connect/articles/snac-lan-enforcement-switch-performancethroughput-dropped-after-enabling-8021x

The classic IEEE 802.1D protocol has the following default timers: 15 seconds for listening, 15 seconds for learning, and a 20-second max-age timeout. All switches in the spanning tree should agree on these timers, and you are discouraged from modifying them. These older timers may have been adequate for networks 10 to 20 years ago, but today this 30 to 50 seconds of convergence time is far too slow, especially for SNAC implementations.

Today, many switches are capable of Rapid Spanning Tree Protocol (IEEE 802.1w), but few network administrators have enabled it. RSTP vastly improves convergence times by using port roles, using a method of sending messages between bridges on designated ports, calculating alternate paths, and using faster timers. Therefore, organizations should use RSTP when they can. If your organization still has switches that cannot run RSTP, don't worry, the RSTP switches will fall back to traditional 802.1D operation for those interfaces that lead to legacy STP switches.
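
On most Cisco Catalyst platforms, enabling the rapid per-VLAN variant is a one-line change; this is only a sketch, so verify the supported modes on your hardware first:

Switch(config)# spanning-tree mode rapid-pvst
Switch# show spanning-tree summary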

The 802.1D Spanning Tree Protocol (STP) standard was designed at a time when the recovery of connectivity after an outage within a minute or so was considered adequate performance. With the advent of Layer 3 switching in LAN environments, bridging now competes with routed solutions where protocols, such as Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP), are able to provide an alternate path in less time.

Cisco enhanced the original 802.1D specification with features such as Uplink Fast, Backbone Fast, and Port Fast to speed up the convergence time of a bridged network. The drawback is that these mechanisms are proprietary and need additional configuration.

Rapid Spanning Tree Protocol (RSTP; IEEE 802.1w) can be seen as an evolution of the 802.1D standard more than a revolution. The 802.1D terminology remains primarily the same, and most parameters have been left unchanged, so users familiar with 802.1D can rapidly and comfortably configure the new protocol. In most cases, RSTP performs better than Cisco's proprietary extensions without any additional configuration. 802.1w can also revert back to 802.1D in order to interoperate with legacy bridges on a per-port basis, although doing so forfeits the benefits it introduces on those ports.

Ref: http://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/24062-146.html

How to use a deployment tool to push packages on a system with System Lockdown enabled?

I will continue from the point where we left off: knowing what a FILE FINGERPRINT in SEP is, how to generate a FILE FINGERPRINT using checksum.exe, and how to edit, append or merge a FILE FINGERPRINT.

Now let's look at how to configure SYSTEM LOCKDOWN, which is a protection setting that you can use to control the applications that can run on the client computer.

Previous Articles:

What is "FILE FINGERPRINT LIST" in Symantec Endpoint Protection (SEP)?
https://www-secure.symantec.com/connect/articles/what-file-fingerprint-list-symantec-endpoint-protection-sep

Is it possible to EDIT, APPEND or MERGE a FILE FINGERPRINT in Symantec Endpoint PRotection Manager (SEPM) ?
https://www-secure.symantec.com/connect/articles/it-possible-edit-append-or-merge-file-fingerprint-symantec-endpoint-protection-manager-sepm

What is SYSTEM LOCKDOWN ? What Stages do I Implement SYSTEM LOCKDOWN in in Symantec Endpoint Protection (SEP) ?
https://www.symantec.com/connect/articles/what-system-lockdown-what-stages-do-i-implement-system-lockdown-symantec-endpoint-protectio

From here, I am writing this article for a specific use case of System Lockdown. Various challenges, such as patch management, remote deployment and support, arise when supporting a system with full System Lockdown enabled. Hence, I would like to bring up the point of using a deployment tool to manage, provision and deploy a system which has System Lockdown enabled.

I propose the following strategy for Windows updates in an environment with System Lockdown implemented, assuming that most of the System Lockdown implementation is already completed and all we need to do is incorporate a deployment tool like LANDesk, SCCM, Tivoli, etc.

  1. Create a Test Group in SEP Manager (you might want to call it Deployment Target Testing or similar)
  2. Stop Policy Inheritance for the group
  3. Change the System Lockdown mode to LOG ONLY
  4. Add the test/pilot machine(s) to the group
  5. If your deployment tool requires an agent, push the agent and reboot if necessary
  6. Push all the approved software packages to this system (which might require multiple reboots)
  7. Monitor the Control log
  8. Gather checksums for the UNAPPROVED applications identified in the Control log (a checksum.exe sketch follows this list)
  9. Merge/append them into the SEP Manager MASTER FILE FINGERPRINT policy
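
For step 8, the checksums can be gathered with the same checksum.exe utility that ships with the SEP client; the output file name and target folder below are placeholders for the unapproved applications you identified:

  checksum.exe unapproved_apps.txt "C:\Program Files\NewlyDeployedApp\"

The resulting text file can then be appended or merged into the MASTER FILE FINGERPRINT list in the SEP Manager.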

DLP mail prevent performance

Usually the first question from DLP customers who want to use DLP Mail Prevent is: "What will be the impact on mail delivery?" Yes, DLP introduces latency in mail delivery, but I will try to show with the tests below that it will not be noticeable for most end users.

Testing system

All tests were performed on a Windows 2012 multi-tier environment using the Symantec DLP v14.5 MP1 solution. Mail was generated with a homemade system consisting of a multithreaded mail generator, to simulate a set of mail servers, and a "latency meter" which receives email after DLP analysis and computes the overall latency introduced by DLP.

The system was configured to generate a smooth email traffic of 10k emails over 40 minutes. Different policies were active on Mail Prevent, using most of the DLP detection techniques (DCM, IDM, EDM).

Results

The graphic below shows that traffic was quite flat, at around 4 emails/s. A single DLP Mail Prevent server is able to process this traffic without any specific issue.

trafic.png

The graph below shows the latency measured for all messages generated by the system. For most emails, latency is lower than 1 second.

latency.png

Higher latency is observed for emails with attachments of a few MB. But even there, most of them are processed in less than 5 seconds.

size-latency.png

Apart from latency measurement, we also checked server resource usage when traffic becomes higher. We observed that the CPU is the most impacted resource when traffic reaches a high level (during our test we reached over 40 emails per second on our single Mail Prevent server). Of course, the network could also become a bottleneck if traffic exceeded the available bandwidth. We did not reach the memory limit with our system, but as with all software, if you do, the server may start to use virtual memory on disk and performance will decrease.

These tests may not exactly match the configuration of your environment, but they show that a Mail Prevent server will not introduce latency into your messaging system that would be noticeable to end users.

Access Symantec Encryption Management Server (PGP) via SSH

To gain command line access to a Symantec Encryption Management Server (PGP Universal Server), you will need to create an SSH key. You can do this using a utility such as PuTTYgen to create an SSH key and PuTTY to log into the command line interface.

This article details how to utilize PuTTYgen and PuTTY to access Symantec Encryption Management Server (PGP).

1. Download the PuTTY suite, or PuTTYgen and PuTTY individually, from the site below:

http://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html

2. Open PuTTYgen.exe, leave the configuration as default, click 'Generate' button:

AccessPGP-01.png

3. Generate some randomness for the key by moving the mouse over the blank area:

AccessPGP-02.png

4. Copy the public key block from Key window where it says 'Public key for pasting into OpenSSH authorized_keys file':

AccessPGP-03.png

5. Click 'Save private key' to save the private key of the key pair you created:

AccessPGP-08.png

6. Log into SEMS management console as a superuser, such as admin, click 'System' --> 'Administrators' --> 'admin':

AccessPGP-04.png

7. Click the plus + sign at the end of the 'SSHv2 Key':

AccessPGP-05.png

8. Select 'Import Key Block', then paste the public key block that you copied in step 4, and click the 'Import' button:

AccessPGP-06.png

9. After uploading the key block, you will notice that the hex fingerprint of the key now shows up in the 'SSHv2 Key' line.

You can verify that the fingerprint matches the fingerprint found in the 'Key fingerprint' field of PuTTY Key Generator from step 3.

AccessPGP-07.png

10. Click the 'Save' button.

11. Open PuTTY.exe, enter the Host Name or IP address of the SEMS, and select SSH as the protocol:

AccessPGP-09.png

12. On the left panel, select 'Connection' --> 'SSH'. Under 'Private key file for authentication', select the private key file that you saved in step 5, then click the 'Open' button to start an SSH session:

AccessPGP-10.png

13. The first time you log into SEMS with PuTTY, you will be shown a security warning; click the 'Yes' button:

AccessPGP-12.png

14. You will be prompted to enter a username; type 'root' and press Enter:

AccessPGP-11.png
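
Once the key has been imported, a command-line alternative to the PuTTY GUI is plink from the same PuTTY suite; the key file name and address below are placeholders:

  plink -i sems_key.ppk root@192.168.1.50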

REMEMBER:

Accessing the server command line for read-only purposes, such as to view settings, logs, etc., is supported. However, performing configuration modifications or customizations via the command line may void your Symantec Support agreement.

Symantec Endpoint Encryption - Generating and Deploying a Recovery Certificate

Reference: https://support.symantec.com/en_US/article.HOWTO101011.html

Assumptions:

  • Symantec Endpoint Encryption 11.1.2
  • Server 2012 R2 standard
  • Microsoft Active Directory Certificate Services is installed and configured on the domain

Creating the MMC

  1. Log onto the SEE server as a user who has rights to request a certificate.
  2. Click on the Start button, type cmd and hit the enter key.
  3. Type mmc and hit the enter key.
  4. Click on File, Add/Remove Snap-in…
  5. Choose Certificates and click Add >.
  6. Choose My user account and click Finish.
  7. Click OK.

Creating the Certificate

  1. Open or create an MMC with the Snap-in called Certificate – Current User.
  2. Expand Certificates – Current User.
  3. Right click on Personal and choose All tasks, Request New Certificate...
  4. When the Certificate Enrollment wizard starts, click Next.
  5. On the Select Certificate Enrollment Policy page, click Next.
  6. On the Request Certificates page, select Basic EFS, click Details, and then click Properties.
  7. On the General tab, enter a Friendly Name: SEEM Server Recovery Certificate <Date>.
  8. Click on the Subject tab.
  9. Under Subject name, choose Common name and set the SEEM server FQDN as the Value and click Add.
  10. Click on the Extensions tab and click on Key usage.
  11. Click on Data encipherment and click Add >.
  12. Click OK.
  13. Click Enroll.
  14. Click Finish.

Exporting PKCS #12 (Certificate and Private Key)

  1. Open or create an MMC with the Snap-in called Certificate – Current User.
  2. Expand Certificates – Current User, Personal, Certificate.
  3. Double click the certificate that you just created.
  4. Click on the Details tab.
  5. Click on Copy to File…
  6. On the Certificate Export Wizard click Next.
  7. On the Export Private Key page, choose Yes, export the private key and click Next.
  8. On the Export File Format page ensure Personal Information Exchange – PKCS #12 (.PFX) is selected and click Next.
  9. On the Security page, select Password and type in a password and click Next.
  10. Click Browse and select where to save the file and choose a descriptive file name and click Save.
  11. Click on Finish.
  12. Click OK.

Exporting PKCS #7 (Certificate)

  1. Open or create an MMC with the Snap-in called Certificate – Current User.
  2. Expand Certificates – Current User, Personal, Certificate.
  3. Double click the certificate that you just created.
  4. Click on the Details tab.
  5. Click on Copy to File…
  6. On the Certificate Export Wizard click Next.
  7. On the Export Private Key page, choose No, do not export the private key and click Next.
  8. On the Export File Format page ensure Cryptographic Message Syntax Standard – PKCS #7 Certificates (.P7B) is selected, choose Include all certificates in the certification path if possible and click Next.
  9. Click Browse and select where to save the file and choose a descriptive file name and click Save.
  10. Click on Finish.
  11. Click OK.

Deploying the Recovery Certificate to a SEE Client

  1. Log onto the server that hosts the SEE Management Console.
  2. Open the SEE Management Console.
  3. Expand the Symantec Endpoint Encryption Software Setup node and click on Windows Client.
  4. Work your way through the wizard and when you reach the Removable Media Encryption Installation Settings – Recovery Certificate page, choose Encrypt files with a recovery certificate.
  5. Browse to the PKCS #7 certificate and choose Open.
  6. Review the Confirm Certificate window and click OK.
  7. Complete the wizard.

Deploying the Recovery Certificate to GPO Based Policies

  1. Log onto the server that hosts the SEE Management Console as a user who has rights to deploy GPO based policies.
  2. Open the SEE Management Console.
  3. Click on the Group Policy Management node.
  4. Drill down, Forest, Domains, Domain, Group Policy Objects.
  5. Right click on the desired GPO based policy and choose Edit…
  6. Expand Computer configuration, Policies, Software Settings, Symantec Endpoint Encryption, Removable Media Encryption and choose Recovery Certificate.
  7. Choose Change this setting, choose Encrypt files with a recovery certificate and click Change certificate…
  8. Browse to the PKCS #7 certificate and choose Open.
  9. Review the Confirm Certificate window and click OK.
  10. Click Save.
  11. Click OK.
  12. Click File, Exit.


Symantec Data Center Security (DCS) Database Archiving

Hi,

This article will discuss how to effectively manage the archiving of the DCS database based on your retention needs and/or performance requirements. This process allows you to minimise the amount of events stored within your active SCSPDB_[Name] database whilst also maintaining your audit requirements.

This is likely necessary if your DCS environment generates a lot of noise, most likely detection events, if you're centrally logging events in DCS.

In this example, the customer has an event retention threshold of, say, 6 months' worth of events, some prevention but mainly detection.

  1. Connect to your Database instance that hosts your DCS database
  2. Navigate to your database, typically named SCSPDB
  3. Navigate to Tables and run the code below. Change the date to something more appropriate for your environment. Choose a time that is, say, 1 week in the past, to ensure your agents have checked in and that your systems have some overlap. Again, 1 week should be enough to ensure this, unless you're experiencing some very serious posting delay.
    select event_type,count(1) from cspevent cs
    where cs.event_dt <= '2017-03-18 00:00:00.000'
    group by event_type
  4. You will use these counts to verify that your database has been migrated with integrity. There will be some disparity between the total events in the archived DB; hence the date filter above, which is used as a cut-off to ensure some additional integrity.
  5. Record the results for reference later.
  6. Create a backup of the database (manually through SSMS or via your automated backup solution); a minimal T-SQL sketch follows this list.
  7. Restore your new backup, but name it differently i.e. SCSPDB_Review_JantoJun2016
  8. Run the same code as step 3 on the restored database and ensure the record counts match. If they do not, start from step 3 again and double-check the figures and that the backups processed properly.
  9. If they do, then on the original DB run the purge script found under Programmability: SCSP_PurgeEvents (6.5) or PurgeEventsByDate (6.6 onwards). An example of the code is shown below for a 6.5 purge script. This will delete all Realtime events that are older than 7 days, and it will delete as many as it can as fast as it can. Change the purge limit to, say, 100,000 if you want to control the performance of the DB and minimise any table locking.
    DECLARE @RC int
    DECLARE @EventCLASS nvarchar(100) = 'Realtime' -- One of "REALTIME", "PROFILE", "ANALYSIS"
    DECLARE @PurgeMode nvarchar(100) = 'Purge' -- One of "TESTMODE","PURGE" (Testmode will show what will happen but does not actually delete anything, Purge does!)
    DECLARE @FilterMode nvarchar(100) = 'Days'
    DECLARE @FilterValue nvarchar(4000) = '7' -- Number of days to keep (anything older will be deleted)
    DECLARE @PurgeLimit int = 0 -- or 100000; this is a "governor" to limit how many records to delete at once
    DECLARE @Process_Rules varchar(8) = 'P' 	-- Flags indicating processing mode. P print, Q quiet
    
    -- TODO: Set parameter values here.
    
    EXECUTE @RC = [DCSSA_Review].[dbo].[SCSP_PurgeEvents]
       @EventCLASS
      ,@PurgeMode
      ,@FilterMode
      ,@FilterValue
      ,@PurgeLimit
      ,@Process_Rules
    GO
    
    
    
  10. NOTE: In 6.6 onwards the script has changed, and TESTMODE actually purges the events, so be careful.
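
For steps 6 and 7, a minimal T-SQL sketch of the backup and restore is shown below. The file paths, archive database name and logical file names are placeholders only; use RESTORE FILELISTONLY to find the real logical names, or simply use your normal backup tooling:

    -- Back up the active DCS database (COPY_ONLY avoids disturbing existing backup chains)
    BACKUP DATABASE [SCSPDB]
      TO DISK = N'D:\Backups\SCSPDB_archive.bak'
      WITH COPY_ONLY, INIT;

    -- Check the logical file names inside the backup before restoring
    RESTORE FILELISTONLY FROM DISK = N'D:\Backups\SCSPDB_archive.bak';

    -- Restore under a new name for archival/reporting use (logical names here are assumptions)
    RESTORE DATABASE [SCSPDB_Review_JantoJun2016]
      FROM DISK = N'D:\Backups\SCSPDB_archive.bak'
      WITH MOVE 'SCSPDB'     TO N'D:\Data\SCSPDB_Review.mdf',
           MOVE 'SCSPDB_log' TO N'D:\Data\SCSPDB_Review_log.ldf';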

Now you'll have a slimline CSP database that is quicker to query and less cluttered, and a review DB that you can query with, say, direct SQL and/or SSRS to create KPIs and/or analysis on the data. Tip: you can extract the SQL from the CSP reports in the Java console and use that to query directly or via SSRS, etc.

Any questions, let me know.

Thanks,

Kevin

How to point the DCS Server to a migrated SQL database

After the SQL database has been migrated to a new instance, the "server.xml" file will need to be updated with the new database information. The default location of this file on the DCS Server is as follows:

"C:\Program Files (x86)\Symantec\Data Center Security Server\Server\tomcat\conf\server.xml"

There are three entries in this file that each begin with a "<Resource auth=" tag:

<Resource auth="Container" driverClassName="net.sourceforge.jtds.jdbc.Driver"
    factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" initialSize="25" logAbandoned="true"
    maxActive="75" maxIdle="50" maxWait="30000" minEvictableIdleTimeMillis="55000" minIdle="25"
    name="Database-Console" password="1234567890abcdefghijklmnopqrstuvwxyzABCD"
    removeAbandoned="true" removeAbandonedTimeout="300" testOnBorrow="true"
    timeBetweenEvictionRunsMillis="34000" type="javax.sql.DataSource"
    url="jdbc:jtds:sqlserver://192.168.1.223/SCSPDB;instance=scsp;integratedSecurity=false"
    username="scsp_ops" validationInterval="34000" validationQuery="SELECT 1"/><Resource auth="Container" driverClassName="net.sourceforge.jtds.jdbc.Driver"
    factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" initialSize="125" logAbandoned="true"
    maxActive="425" maxIdle="175" maxWait="30000" minEvictableIdleTimeMillis="55000"
    minIdle="125" name="Database-Agent" password="1234567890abcdefghijklmnopqrstuvwxyzABCD"
    removeAbandoned="true" removeAbandonedTimeout="300" testOnBorrow="true"
    timeBetweenEvictionRunsMillis="34000" type="javax.sql.DataSource"
    url="jdbc:jtds:sqlserver://192.168.1.223/SCSPDB;instance=scsp;integratedSecurity=false"
    username="scsp_ops" validationInterval="34000" validationQuery="SELECT 1"/><!-- UMC DB Resource --><Resource auth="Container" driverClassName="net.sourceforge.jtds.jdbc.Driver"
    factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" initialSize="34" logAbandoned="true"
    maxActive="277" maxIdle="233" maxWait="30000" minEvictableIdleTimeMillis="55000" minIdle="89"
    name="Database-UMC" password="1234567890abcdefghijklmnopqrstuvwxyzABCD"
    removeAbandoned="true" removeAbandonedTimeout="300" testOnBorrow="true"
    timeBetweenEvictionRunsMillis="34000" type="javax.sql.DataSource"
    url="jdbc:jtds:sqlserver://192.168.1.223/dcsc_umc;instance=scsp;integratedSecurity=false"
    username="umcadmin" validationInterval="34000" validationQuery="SELECT 1"/>

The hostname/IP of the SQL Enterprise database will need to be updated in the "url=" portion of each of these three entries, as follows:

url="jdbc:jtds:sqlserver://192.168.1.223/SCSPDB;instance=scsp;integratedSecurity=false"

In the above example the "192.168.1.223" entry will need to be updated to the new hostname/IP of the migrated database.
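
For example, if the database moved to a host named NEWSQLHOST (a placeholder), each url attribute would become:

    url="jdbc:jtds:sqlserver://NEWSQLHOST/SCSPDB;instance=scsp;integratedSecurity=false"

The dcsc_umc entry is updated in the same way, and the instance name should also be adjusted if it changed during the migration.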

Please note that the DCS services should be stopped before modifying the "server.xml" file (see the image below for reference):

Turn off DCS Services.png

Once the "server.xml" file has been successfully modified, the services can be turned back on and the DCS server's database should be properly migrated.

How to collect and add fingerprint of any app or location to SEP manager (Graphical)

Hi all,

In this article, I will explain the procedure to collect file fingerprint of any file or location within the system and add the same to Symantec Endpoint Protection Manager.

So, Let's get started.

Step 1: Go to Local Drive > Program files(x86) > Symantec > Symantec Endpoint Protection.

You will find Checksum.exe in this folder, which we will use to collect the file fingerprints.

Step 2: Press and hold the Shift key, right-click in an empty area (see the screenshot below) and select Open Command Window Here.

Screenshot_1_0.png

Step 3: It will then open the command window at this location. 

Screenshot_2_0.png

Step 4: Now suppose you want to collect file fingerprints of every file from your computer's particular drive (in this case I have selected D drive)

Step 5: 

a. In this window type Checksum.exe, or simply type "Ch" without quotes and hit Tab; this will automatically complete Checksum.exe from this location.

b. Now type the name of the output file which will store the file fingerprint data. In this example I have used output.txt. You can give this file any name, followed by the .txt extension.

c. So the command until now is - Checksum.exe output.txt (There is a space between checksum.exe and output.txt)

d. Next, specify the drive or file path for which we need to collect file fingerprints. So type "D:\" with quotes.

e: So the complete command to collect file fingerprint of all files from D drive is - 

Checksum.exe output.txt "D:\" (There is a space between checksum.exe and output.txt and "D:\")

Screenshot_3_0.png

f: Hit enter and it will start collecting the file fingerprints from D drive as shown below -

Screenshot_4_0.png

Step 6: After the process completes, the window will close automatically and the output file will contain the list of file fingerprints for the files on the D drive.

Screenshot_7_1.png
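
The output file lists one entry per file; in my experience the format is a hash followed by the full file path, but treat the lines below as purely illustrative values rather than real output:

  0a1b2c3d4e5f66778899aabbccddeeff d:\data\report.docx
  112233445566778899aabbccddeeff00 d:\tools\example.exe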

Another example: collect the file fingerprint of Google Chrome (an executable file).

Step 1: Right-click the Google Chrome icon and select Properties, then click the Shortcut tab and copy the target path, which points to chrome.exe.

Screenshot_5_0.png

Step 2: Type the command as - Checksum.exe output.txt "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"

Screenshot_6_0.png

Step 3: Hit enter and it will immediately collect file fingerprint of chrome.exe and store that in output.txt file (see screen shot)

Screenshot_8_0.png

Adding the output fingerprint file into SEPM.

Step 1: Open the Symantec Endpoint Protection Manager and go to Policies > Policy Components > File Fingerprint Lists, then click Add a File Fingerprint List.

This opens the Add File Fingerprint Wizard; click Next.

Screenshot_9_0.png

Step 2: Enter a name and description for the list.

Screenshot_10_0.png

Step 3: Hit Next when you get the screen below.

Screenshot_11_0.png

Step 4: Browse the path to the output.txt file.

Screenshot_12_0.png

Step 5: Hit Next and the file will get added to the SEPM.

Screenshot_13_0.png

Screenshot_14_0.png

Step 6: Hit Finish and you will see the fingerprint file saved in SEPM.

Screenshot_15_0.png

Thanks,

nThakare :)

Ransomware Discovery

Hi All,

These days we are hearing of many cases of ransomware infection, which badly impacts not only the business but also its critical data. This malware encrypts sensitive data with a key generated by the attacker's command-and-control (C&C) server. Ransomware enters the network and infects critical servers silently, and the installed antivirus is often unable to detect it proactively. I have worked on a couple of ransomware attacks, so I am sharing my experience along with a little research, history, best practices and prevention methodology. This article is focused on ransomware discovery; the next article will be focused on prevention methodology. I am trying to answer all the 'wh' questions related to ransomware.

Ransomware History and Trend

Ransomware is malware that encrypts a user's files and folders, and often deletes the original copies; the decryption keys are only provided if a ransom (money) is paid to the attacker.

ransomware1.jpg

Trend

Ranomware 2.jpg

Why does ransomware target businesses?

  • Attackers are aware that ransomware can create major business disruption, which increases their chances of being paid more.
  • Computers in companies are prone to vulnerabilities, which can be exploited through technical means and social engineering tactics.
  • Cyber criminals also know that businesses often do not report ransomware attacks, to avoid legal or reputational consequences.

What are the most common methods ransomware uses to get in?

  • Plenty of spam emails with malicious links or attachments, sent as part of offer or notification campaigns
  • Exploitation of vulnerable software
  • Botnets
  • Self-propagation (spreading from one infected machine to another)

Ranomeware 3.png

Why does ransomware go undetected?

  1. Ransomware communication with command-and-control servers is encrypted
  2. Browsers or methods like Tor and Bitcoin are used to avoid tracking by law enforcement agencies
  3. Anti-sandboxing techniques are used so antivirus won't flag it as an abnormal process
  4. Encrypted payloads make it difficult for antivirus to identify the malware
  5. The polymorphic behavior of ransomware gives it the ability to alter itself and create new variants
  6. Ransomware has the ability to remain dormant

How to install SEPM 14 MP1 with embedded database (Graphical)

Dear all,

This tutorial will give you an overall idea of how to install the newly available Symantec Endpoint Protection Manager 14 MP1 with the embedded database.

So let's get started.

Step 1 - Download and extract the SEP 14 MP1 package, then run setup.exe as an administrator.

Screenshot_1.png

Step 2 - The installation will begin; hit Next.

Screenshot_2.png

Step 3 - Accept the license agreement and click Next

Screenshot_3.png

Step 4 - It will automatically select the below location for the SEP Manager install (check that you have enough disk space available), or change the install directory.

Screenshot_4.png

Step 5 - The setup is now ready to install, click Next

Screenshot_5.png

Step 6 - The setup will now get installed and this will copy all the required files to said location

Screenshot_6.png

Step 7 - Setup is now installed and we now need to configure the management server; click Next.

Screenshot_7.png

Step 8 - You will see the Management server configuration wizard splash screen

Screenshot_8.png

Step 9 - Select the appropriate configuration type.

Note: The default configuration is for a new installation with fewer than 500 clients.

The custom configuration lets you customize options, such as selecting a SQL database for managing the SEP clients and their data.

In this case we are going with the default configuration, which selects the embedded database by default.

Screenshot_9.png

Step 10 - In this page, you need to fill the details like -

1. Company Name - Enter your company name

2. User name - This will be used as a username while login into SEPM console

3. Password - Enter the password; this will be used to authenticate the user at SEPM console login (you can change this password anytime from SEPM) and also as the database password

4. Confirm password - Enter the password same as above

5. Email address - Enter the email address of the administrator who might want to get password recovery emails and notifications from SEP manager

6. The emails will be sent to the registered email ID only if you add an email server to SEPM (contact your IT team for this)

The rest is self-explanatory.

 Screenshot_10.png

Step 11 - Uncheck the LiveUpdate installation, as it will take several hours to download and install the definitions (you can run it later).

Partner information is optional; fill it in if you feel it is necessary, and hit Next.

 Screenshot_11.png

Step 12 - The database will be created in the specified location; this will take a lot of time, so be patient and let it finish.

If there is an issue, it will throw an error that you can use to troubleshoot further.

Screenshot_12.png

Step 13 - After the database is built, the configuration is complete. Hit Finish to launch the SEP Manager.

Screenshot_13.png

Step 14 - After you log in to SEPM using your username and password, your SEPM is installed and you can now deploy the SEP client to your systems.

Screenshot_14.png

Thanks,

nThakare :)
