JIRA Export Import Project Issues

This is a topic that seems to be poorly documented – exporting and importing projects between JIRA instances. There is an entire-system backup, but the documentation makes it clear that restoring individual projects from the system backup is not fully supported. The best method our team has come up with is the CSV export capability of the JIRA issue search functionality – this export is designed to match the CSV “external system” import for JIRA projects. Both issues and their associated comments can be exported this way, and the default field names for the export should map cleanly to the import for most users. The project must be created on the target JIRA before importing issues and comments. If the import fails, you may need to configure the target project to accommodate any special issue fields or project settings your issues and comments require. Atlassian’s documentation on the CSV external system import is the closest related reference.
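
For illustration, here is a hypothetical two-row CSV showing the general shape the importer expects. The column names and the semicolon-delimited comment format (date; author; body) are assumptions to verify against your own export and the import wizard’s field mapping, not an official schema:

Summary,Issue Type,Priority,Assignee,Description,Comment
"Login fails for LDAP users",Bug,High,jsmith,"Users cannot log in via LDAP","01/02/2017 10:15;jsmith;Still investigating"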

Good luck importing and exporting project issues with JIRA. It’s strange to me that Atlassian supports no native JIRA issue import/export or project import/export beyond this generic CSV capability. The process may be a little rough, but it works reliably, and the same approach applies to importing data from other issue-tracking systems missing from the external system import list in JIRA.


Exchange Offline Address Book Troubleshooting

The Exchange Offline Address Book (OAB) for Outlook clients can be a difficult beast to troubleshoot. When it isn’t working correctly, Outlook clients may fail to report an error and will continue to synchronize stale Global Address List records. Here are some hints for successfully resolving OAB issues – tested with Exchange 2010.

  • Try following the Corelan document: Fixing Exchange 2007 Offline Address Book generation (oalgen) and distribution issues
    • This applies equally well to Exchange 2010 although some file paths may have changed slightly.
  • Before intensive troubleshooting, try restarting the “MSExchangeSA” System Attendant service on the server responsible for generating the Offline Address Book. If Windows Updates are pending a reboot, just reboot Windows instead.
  • Following a clean restart of the System Attendant on your OAB Generation Server, use the Exchange Management Console (EMC) to manually start an “Update” of the OAB under Organization Configuration – Mailbox – Offline Address Book.
    • Status of the Offline Address Book generation should appear in the Windows “Application” Event Log with Source “MSExchangeSA” – wait for an indication that the OAB updates have completed.
  • If the OAB generation completes successfully, you can force synchronization to each CAS server by restarting the “MSExchangeFDS” File Distribution Service on each one. Review the Windows “Application” event log for messages from Source “MSExchangeFDS” indicating whether the new OAB has finished copying. (These restart and update steps can also be scripted – see the shell sketch after this list.)
    • Default “polling” interval for FDS is 8 hours (480 minutes) – this is a long time to wait for testing!
  • Review the OAB file dates. Many should have been updated within the last 24 hours. Your system may use slightly different paths.
    • On the OAB Generation Server under C:\Program Files\Microsoft\Exchange Server\V14\ExchangeOAB\*\*.* (these are created by the System Attendant)
    • On the CAS servers that distribute your address book under C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\OAB\*\*.* (these are created by the File Distribution Service)
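
For reference, the restart and update steps above roughly correspond to the following Exchange Management Shell commands. This is a minimal sketch assuming the default OAB name – adjust the identity for your environment, and note that property names like LastTouchedTime may vary by Exchange version:

# On the OAB generation server:
Restart-Service MSExchangeSA
Update-OfflineAddressBook -Identity "Default Offline Address Book"
# On each CAS server, once generation has completed:
Restart-Service MSExchangeFDS
# Check OAB names, generation server, and last update time:
Get-OfflineAddressBook | Format-List Name,Server,LastTouchedTime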

There is a lot more to this troubleshooting than I’m mentioning here. I highly recommend reading and following the Corelan document linked above – it was very helpful to me in resolving this issue.


Cisco ASA Command Line Basics

This post is for people who are new to the Cisco ASA command line, or for seasoned network administrators like myself who need to brush up on the basics of the ASA console. Instead of using my own words, I’ll just point to the official documentation as a good reference and introduction.

It’s a great resource because it covers many common tricks and techniques for making the best use of your ASA device. The CLI skills it teaches let you accomplish complex tasks quickly rather than stumbling through basic device configuration.
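
As a small taste of the basics covered there, here is a typical session flow – the prompts assume the default “ciscoasa” hostname: enable enters privileged EXEC mode, configure terminal enters global configuration mode, and write memory saves the running configuration to the startup configuration. (The hostname used is an arbitrary example.)

ciscoasa> enable
ciscoasa# configure terminal
ciscoasa(config)# hostname lab-asa
lab-asa(config)# write memory
lab-asa(config)# exit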


AES-GCM on Cisco ASA

This is a request for comments to clarify the proper network-security usage of the new AES-GCM cryptography functionality on the Cisco ASA platform. Please leave a comment if you can provide insight to help readers be better informed on how and when to use AES-GCM with the Cisco ASA. I’m using the documentation for reference: “CLI Book 3: Cisco ASA Series VPN CLI Configuration Guide, 9.7 – Chapter: IPsec and ISAKMP.” The quotes I’m trying to decipher from the document follow:

  • “When AES-GCM is specified as the encryption algorithm, an administrator can choose null as the IKEv2 integrity algorithm” – under section “IKE Policy Keywords and Values”
  • “You must choose the null integrity algorithm if AES-GCM/GMAC has been configured as the encryption algorithm” – under section “Create Static Crypto Maps”

For the ikev2 policy, this is fairly simple: the only allowable integrity method is “null” when the encryption type is set to aes-gcm-*. Because the software enforces the requirement, the documentation’s ambiguity does no harm here.
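
For example, here is a minimal IKEv2 policy sketch – the priority number, PRF, DH group, and lifetime are arbitrary illustrative choices, not recommendations:

ciscoasa(config)# crypto ikev2 policy 10
ciscoasa(config-ikev2-policy)# encryption aes-gcm-256
ciscoasa(config-ikev2-policy)# integrity null
ciscoasa(config-ikev2-policy)# prf sha256
ciscoasa(config-ikev2-policy)# group 14
ciscoasa(config-ikev2-policy)# lifetime seconds 86400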

For the ipsec ipsec-proposal, things remain confusing to me. When the esp encryption is set to aes-gcm-*, the ASA software allows configuration of integrity types other than “null” … this seems to violate the documentation, which states “YOU MUST CHOOSE THE NULL INTEGRITY ALGORITHM [when encrypting with aes-gcm-*].” An example further down in the documentation suggests the correct usage: an aes-gcm proposal with the null integrity algorithm specified.

ciscoasa(config)# show run crypto ipsec 
 crypto ipsec ikev2 ipsec-proposal GCM 
 protocol esp encryption aes-gcm
 protocol esp integrity null

I think I may need to fall back on a more basic question: “What is the AES-GCM cipher, and why would I want to use it?” According to a Wikipedia article, Galois/Counter Mode (GCM) was “adopted because of its efficiency and performance” and is “designed to provide both data authenticity (integrity) and confidentiality.” This explains why we use “null” integrity under the ESP configuration – integrity is already built into the encryption algorithm. Any additional integrity method specified for a GCM ESP session would just add overhead, because GCM already provides message authentication for the encrypted channel. Because the ASA allows an integrity method to be specified here, I think it is letting users misconfigure ESP and establish tunnels with performance-reducing double integrity checks.

Update for ASA 9.7 software: when configuring aes-gcm-* for esp, there is an informative message regarding the esp integrity selection: “WARNING: GCM\GMAC are authenticated encryption algorithms. esp integrity config is ignored.” This clarifies that if the peers negotiate GCM for the security association, none of the configured integrity methods will be used.

Regarding why we would want to use it, the same article states: “GCM can take full advantage of parallel processing.” This is the real reason to use GCM – it’s FASTER THAN TRADITIONAL AES ON MULTI-CORE NETWORK DEVICES! When AES-CBC is paired with a separate HMAC integrity algorithm, there appears to be no real security advantage to AES-GCM. The original AES Cipher Block Chaining (CBC) mode is a sequential algorithm best suited to execution in a single process (single-core hardware utilization), which makes traditional AES a fine choice on single-core devices. AES-GCM should be considered for performance and throughput improvements on multi-core network hardware. Even on systems with fewer cores, GCM may be a good choice if a multi-core or pipelined cryptography accelerator module provides high-performance GCM cipher features for your IPsec traffic.

As a side note, the Wikipedia article also explains that GMAC is “an authentication-only variant of the GCM.” In my opinion, Cisco should make it clear to customers that GMAC DOES NOT ENCRYPT YOUR DATA!!! Because of this, I recommend that Cisco ASA users avoid the GMAC non-cipher. The good news is that gmac is not available under the ikev2 policy. Unfortunately, gmac is a selectable option under the ipsec-proposal, so please avoid it if you want your data to remain private!!


SSSD-AD TGT failed verification

Users of RHEL 7 and CentOS 7 on Windows Active Directory networks are likely enjoying the benefits of the SSSD-AD domain-join client module along with the realmd tool, which facilitates proper management of the SSSD client configuration (a very complex task).

Unfortunately, with enterprise domain services like Active Directory (AD), there are MANY things that can go wrong (Murphy’s Law). Every AD domain is unique, because AD is highly customizable. One such thing that can go wrong is Kerberos ticket validation. TGT verification failure is such a common issue that Red Hat has dedicated a (customers-only) solution page to it: SSSD user logins fail due to failed TGT validation.

As of March 2017, their suggestions failed to mention the solution I needed on one of my systems, so I’m recording the issue here for future reference.

  • Set debug_level to a high value like 6 for all sections in your sssd.conf file
  • Restart sssd “systemctl restart sssd”
  • Attempt to login to your system with a domain user account
  • Review the contents of /var/log/sssd/krb5_child.log for indicators of this problem
    • A “validate_tgt” line like “TGT failed verification using key for [host/your.fqdn@YOUR.REALM]”
    • Additional lines like “Server not found in Kerberos database”
  • I’m not sure if you can get more useful log output by setting the debug level higher – this was the most detail I found before solving the problem
  • CHECK THE DOMAIN FOR DUPLICATE SPNs
    • Duplicate Service Principal Names will be indicated in the output of the Windows command “setspn -X” when run by a domain admin user.
    • If duplicate SPNs are found, they must be resolved
      • EITHER delete unused duplicate objects from AD
      • OR leave domain (realm leave) – change hostname (nmtui) – reboot – rejoin domain (realm join) with Linux client
  • This security validation of the Ticket Granting Ticket (TGT) is controlled by the setting “krb5_validate” (true or false) in the domain-specific section of your “sssd.conf” file. You can change this setting and restart sssd to test whether the TGT validation checks are causing your issue (see the sssd.conf fragment after this list). There are many factors that can cause this validation to fail – a duplicate SPN is one such issue. For other possibilities, please review the Red Hat solution linked above, or try your luck with Google :-).
  • Once the underlying issue is resolved, set krb5_validate back to true
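
Here is a minimal sssd.conf fragment illustrating the debug and validation settings discussed above – the domain name is a placeholder, and only the relevant options are shown:

[sssd]
debug_level = 6

[domain/example.com]
debug_level = 6
# Temporarily disable TGT validation FOR TESTING ONLY, then revert to true:
krb5_validate = false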

I hope this is helpful to someone experiencing the same issue. As we can see here, cleaning up the Active Directory infrastructure resolves the problem (removal of a duplicate SPN in this case). Other basic infrastructure services should all be in place and functioning correctly, including forward and reverse DNS name resolution, secure dynamic DNS name registration, and domain time synchronization (PDC authoritative, automatic for Windows domain members, NTP/chronyd for Linux clients).


Get Rid of virbr0

In RHEL 7.x and CentOS 7.x you may see an odd extra network interface listed as “virbr0” (virtual bridge zero). It is provided as a default way to share the host’s physical network with private guest virtual machines. Unfortunately, this interface is assigned an IP by default and shows up as an active interface even when you’re not running any virtual machines.

If you’re not using the RHEL/CentOS virtualization features, I recommend turning off libvirtd, which gets rid of this odd extra interface. This may be particularly useful to enterprise domain users who would like to prevent this non-routable IP from being registered in the organization’s DNS infrastructure, causing reachability and name-resolution trouble for network users.

  • systemctl disable libvirtd
  • systemctl stop libvirtd
  • shutdown -r now

After a reboot, virbr0 will be gone from your system and the network configuration will be clean again. I’m not sure if there is an easy way to reload the network without rebooting – feel free to comment if you have a reliable, supported way to finish this task without rebooting the system.
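
One possible reboot-free approach – a sketch I have not verified across releases, so treat it as an assumption – is to tear down libvirt’s “default” NAT network directly while libvirtd is still running:

virsh net-destroy default                # tears down the network and virbr0 now
virsh net-autostart default --disable    # keeps it from returning at next boot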

If you’re making use of the virtualization functionality, you can still restrict which interfaces are used for Dynamic DNS registration. Use the “dyndns_iface” option in your “sssd.conf”.


Jetty Dropped AJP Support. Use Tomcat

Apache Tomcat ships with an optimized load-balancing / reverse-proxy protocol known as AJP, or more formally the Apache Tomcat Connector specification. This makes Tomcat a top choice among the many Java Servlet and JSP web app containers available. There was limited AJP support in Jetty (the Eclipse Java web server), but it has been removed in the current Jetty 9.x rewrite (Jetty 9.0 stable was first released in March 2013). Because of this, I recommend that multi-tier Java web app deployments consider Apache Tomcat as the app container and Apache Httpd with the built-in mod_proxy_ajp as the load-balancing web front end + SSL/TLS layer (providing reverse proxy services over optimized AJP to your Tomcat instances).

In the past, Apache Httpd users were frustrated by mod_jk, an AJP module for httpd that usually needed to be compiled from source. Today we can forget mod_jk and use the distribution-packaged and supported mod_proxy_ajp, which provides the same capabilities without the headache of compiling the mod_jk source against distribution-provided httpd software. Debian, Ubuntu, RHEL, and CentOS all provide mod_proxy_ajp packages with essential security and bug-fix updates through apt and yum vendor-supported repositories. Converting from a plain HTTP-based reverse proxy configuration to the more tightly integrated AJP is as simple as changing the ProxyPass entries in your Apache Httpd configuration file(s). Here’s an example.

  • Old HTTP-based reverse proxy configuration sample
    • ProxyPass /context http://servername:8760/context
    • ProxyPassReverse /context http://servername:8760/context
  • New AJP-based reverse proxy configuration sample
    • ProxyPass /context ajp://servername:8761/context
    • ProxyPassReverse /context ajp://servername:8761/context
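
For multiple Tomcat back ends, the same mod_proxy machinery provides native AJP load balancing via mod_proxy_balancer. Here is a sketch with hypothetical host names and ports; the route values should match each Tomcat instance’s jvmRoute attribute in server.xml for sticky sessions to work:

<Proxy balancer://myapp>
    BalancerMember ajp://app1.example.com:8009 route=node1
    BalancerMember ajp://app2.example.com:8009 route=node2
</Proxy>
ProxyPass        /context balancer://myapp/context stickysession=JSESSIONID
ProxyPassReverse /context balancer://myapp/context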

I understand there is an ongoing (eternal?) religious debate regarding the relative merits and disadvantages of HTTP vs. AJP reverse proxy usage for Java web apps. There are two reasons I consider AJP a better option than a plain HTTP reverse proxy:

  1. When AJP is used, the Java web app receives the original headers from the front-end web server as if the request had been received directly from the original client. This is a HUGE ADVANTAGE! There is a little overhead required to make the reverse-proxy request between front end and back end while also passing along the original headers, but it IS TOTALLY WORTH IT: your app can process the real request without elaborate workarounds that try to configure the app to respond with front-end URLs while ignoring the confusion introduced by reverse-proxy HTTP request headers. If you use an HTTP reverse proxy with Java web apps, you know what I’m talking about!
  2. AJP is proxy-aware – by this I mean that the front-end web server (load balancer / reverse proxy) and the back-end app server are both aware of, and actively participating in, a proxy (and possibly load-balancing) relationship. This enables passing of the original headers, direct monitoring of app-server status, and advanced native load-balancing capabilities (monitored addition or removal of app-server nodes from the balanced set, true load-based connection assignment at the front end). These capabilities add a little overhead to the AJP protocol, but the native load-balancing and reverse-proxy capabilities are definitely worth it in my opinion.

Not everyone will agree with this opinion. AJP is not an Internet standard but rather an Apache Tomcat protocol specification that enhances the interaction between a front-end web server and back-end Java app server(s). Even so, I feel that Java web app deployment teams and developers should strongly consider AJP as the reverse-proxy and load-balancing solution of choice as long as Tomcat, Apache Httpd, and other web and app server solutions support this valuable multi-tier enhancement for better web app deployment capabilities.


Tomcat Multiple Instances RHEL 7 CentOS 7

This is a follow-on to my earlier posts Apache Tomcat in RHEL 7 and RHEL 7 Administration Notes. It builds on the goal of using the system-packaged Tomcat and Java software in order to receive security and bug-fix updates when the distribution posts them to the supported yum repositories. In this case, we’re leveraging the advanced “multiple Tomcat instances” capability included with Tomcat 7 as packaged by Red Hat for the RHEL 7 and CentOS 7 Linux distributions. The RHEL team has done the work to ensure that the supported package enables multiple Tomcats if you follow a combination of the official Apache Tomcat 7 “RUNNING” document (linked above) and some of the comments provided with the RHEL Tomcat 7 configuration files. There is no single place where this documentation is combined in a practical way, so here are my brief notes to tie it all together.

  • REPLACE all occurrences of “INSTANCE” below with your true instance name. An example instance name might be “geoserver.”
  • Global configuration for ALL Tomcat Instances is provided in /etc/tomcat/tomcat.conf – any changes made here will be reflected across your entire “cat farm.”
  • Instance-specific systemd (systemctl) Tomcat services are defined by creating new files like /usr/lib/systemd/system/tomcat@INSTANCE.service – copy the existing /usr/lib/systemd/system/tomcat@.service template file and alter it for your instance. You can modify the instance Description, User, and Group here.
  • Instance-specific environment variables are defined by creating new files like /etc/sysconfig/tomcat@INSTANCE – copy the existing /etc/sysconfig/tomcat. By default, everything in this file is commented out. Use the conf/server.xml changes listed below to set the ports used when a given Tomcat instance runs.
    • To SET MAXIMUM MEMORY for a specific Tomcat instance, place a new line in /etc/sysconfig/tomcat@INSTANCE like the following for an 8G max
    • CATALINA_OPTS="-Xmx8G"
  • The instance-specific CATALINA_BASE directory is pre-defined to live under /var/lib/tomcats/INSTANCE – note: “Mimic” below means use a similar permissions scheme and expect similar contents when the service is running.
    • Create a “conf” directory here
      • Should contain a COPY of contents of /etc/tomcat
      • CHANGE “server.xml” here to reflect instance-specific ports (see the fragment after this list)!!
    • Create a “logs” directory here
      • Mimic /var/log/tomcat
      • Add /etc/logrotate.d/tomcat@INSTANCE as needed
    • Create a “webapps” directory here
      • Mimic /var/lib/tomcat/webapps
      • Drop your app WAR files or app context directories here to deploy.
    • Create a “work” directory here
      • Mimic /var/cache/tomcat/work
    • Create a “temp” directory here
      • Mimic /var/cache/tomcat/temp
  • If SELinux is enabled, you may need to set the correct security context on the above instance-specific folders and files in addition to setting the instance-specific permissions based on the user & group the service runs as.
  • Manage with the standard systemctl commands like:
    • systemctl status tomcat@INSTANCE
    • systemctl enable tomcat@INSTANCE # enable start-on-boot
    • systemctl start tomcat@INSTANCE # start instance right now
    • systemctl stop tomcat@INSTANCE # stop instance right now
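
For reference, the port-related pieces of a per-instance conf/server.xml look roughly like this – the port numbers are arbitrary examples; just ensure every instance gets unique values:

<Server port="8105" shutdown="SHUTDOWN">
  ...
  <Connector port="8180" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
  <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" />
  ...
</Server>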

Good luck with your multi-instance Tomcat project on RHEL 7 or CentOS 7. By using a pattern similar to this, you should be able to take advantage of Red Hat provided security and bug-fix yum updates for your Java Servlet and JSP web apps. If you’re using the single default instance of Tomcat, see my earlier post linked at the top of this article.


SharePoint API Invoke-RestMethod PowerShell

Invoke-RestMethod was introduced in PowerShell 3.0, but unfortunately that version has a bug that prevents the user from setting the “Accept” header. The SharePoint 2013 REST API requires specific values in the Accept header to return common items like list data. A typical example to retrieve the contents of a list might look like:

  • $restData = Invoke-RestMethod -UseDefaultCredentials -Headers @{ "Accept" = "application/json;odata=verbose" } "http://server/site/_api/lists/getbytitle('listname')/items"

If you’re running a version of PowerShell older than 4.0, this will fail to set the Accept header and will return an error. If you retrieve the data without the header, it will be incomplete (missing the actual data). PowerShell 4.0 is included with Windows 8.1 and Server 2012 R2. For Windows 7 and Server 2008 R2 (both Windows NT 6.1), you can download “Windows Management Framework (WMF) 4.0.” Newer versions of PowerShell also contain the fix.

Another nasty bug relates to PowerShell’s handling of SharePoint JSON list data – the default list properties include DUPLICATE “Id” and “ID” fields. If you attempt to convert a JSON string containing these duplicates using ConvertFrom-Json, it fails with an error. One rough workaround is to replace the duplicate “Id” key with a non-duplicate name. A better workaround is to explicitly use the ODATA “$select” query option to choose only specific columns for SharePoint to return (eliminating the duplicate without a rough string replace). The real trouble is with PowerShell’s JSON deserialize implementation – because ALL identifiers in PowerShell are case-insensitive, it cannot create both an “Id” and an “ID” property on the same object. Here’s an example of the rough conversion of the duplicate key property (using $restData from above), inspired by @JPBlanc’s answer to “ConvertFrom-Json … contains the duplicated keys” (stackoverflow.com):

  • $json = ($restData -creplace '"Id":', '"Idx":') | ConvertFrom-Json
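
For comparison, here is a sketch of the $select approach with hypothetical column names. Note the backtick before “$select” – inside a double-quoted PowerShell string it would otherwise be expanded as a variable:

  • $restData = Invoke-RestMethod -UseDefaultCredentials -Headers @{ "Accept" = "application/json;odata=verbose" } "http://server/site/_api/lists/getbytitle('listname')/items?`$select=Title,Modified"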

More complete ODATA $select examples are easy to find with a quick Google search. Another advantage of the $select approach is improved performance – the server only sends the data you actually intend to use.


Subversion Windows Deduplication Bug

Windows Server 2012 introduced a valuable Data Deduplication feature to reduce physical storage needs on volumes where multiple copies of identical data reside. Unfortunately, the way this feature is implemented is visible to client programs in the form of “reparse points.” Older uses of reparse points include a symbolic-link feature similar to that found on Unix and Linux systems. Because symbolic links may require special treatment, shared libraries like the Apache Portable Runtime (APR) detect most reparse points as symbolic links, even when in reality they may just be Windows-deduplicated (storage-optimized) files. The popular version control system Subversion relies on the APR library to handle files in a user’s working copy of project files. This APR behavior causes Subversion to treat deduplicated files on Windows as symbolic links, which receive special treatment either as text files containing a symlink path (Unix/Linux-created symlink) or as unsupported entities (Windows-created symlink). In either case, this doesn’t match the behavior needed for Subversion to ignore the reparse point and treat the file as normal (not a symlink).

Related bugs include the following issues, which remain open as of 4 Jan 2017. Additional discussion of Subversion symlink behavior is posted in the Subversion FAQ.

  • NTFS Reparse Points are treated as [Unix/Linux] APR_LNK, which is only correct for a junction/dir link – APR bug #47630.
    • Resolution of the APR bug might yield an acceptable fix allowing Subversion to permit Windows deduplication of a working copy.
  • Add support for Windows symlinks (junction points), SVN-3570 (issues.apache.org)
    • While unrelated to the deduplication problem, this bug does mention the same “Symbolic links are not supported on this platform” error that users will see if deduplication reparse points are detected in the working copy during a commit / check-in.

Until a fix is available, users should avoid storing any working copy of Subversion projects on a Windows-deduplicated volume. If files in a working copy become deduplicated, the resulting reparse points may lead to corruption of properties in the repository: specifically, the “svn:special” property will be set to “*”, marking the file as a symbolic link. Other users checking out or updating these marked files may receive partial (corrupted) files in their working directories, apparently because the SVN client assumes the file is a text representation of a symbolic link – files may be truncated? Removing the special property from the affected files in the repository and checking out a fresh copy should resolve the issue, but the corrupt working copy should be abandoned (delete it after backing up any changed / uncommitted files). To avoid checking in a corrupt file, I recommend modifying the file properties with svnmucc or another tool, like the TortoiseSVN Repository Browser, that can change properties directly against the repository URL without relying on a working copy. To view all file properties in a repository, use a command like “svn proplist …”

To resolve these problems, I have created PowerShell scripts that assist with the following maintenance tasks:

  • detect and remove deduplication on working copies (the working copy location must also be added to the exclusion list in the Windows deduplication settings)
  • detect and remove the svn:special property from files, directly against the repository, using svnmucc

Unfortunately I don’t have time to clean up and post my full scripts at the moment – I hope to return in the future to add them – but here is a rough sketch of the rehydration step in the meantime.
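
This is a minimal illustration, not the maintenance scripts mentioned above. The path is a placeholder, and Expand-DedupFile requires the Windows Data Deduplication feature and its PowerShell module:

# Find files in a working copy whose ReparsePoint attribute is set
# (likely dedup-optimized), then rehydrate them with Expand-DedupFile.
$wc = 'C:\path\to\working\copy'
Get-ChildItem -Path $wc -Recurse -File |
    Where-Object { $_.Attributes -band [IO.FileAttributes]::ReparsePoint } |
    ForEach-Object { Expand-DedupFile -Path $_.FullName }

On the repository side, a single bogus property can be removed directly against the URL with something like “svnmucc -m "Remove bogus symlink property" propdel svn:special https://svn.example.com/repos/project/trunk/file.txt” (hypothetical repository URL).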
