Jetty Dropped AJP Support. Use Tomcat

Apache Tomcat ships with an optimized load-balancing / reverse-proxy protocol known as AJP (the Apache JServ Protocol), supported by the Apache Tomcat Connectors project. This makes Tomcat a top choice among the many Java Servlet and JSP web app containers available. Jetty (the Eclipse Java web server) had limited AJP support, but it has been removed in the current Jetty 9.x rewrite; Jetty 9.0 stable was first released in March 2013. Because of this, I recommend that multi-tier Java web app deployments consider Apache Tomcat as the app container, with Apache Httpd and its built-in mod_proxy_ajp as the load-balancing web front end and SSL/TLS layer, providing reverse proxy services to your Tomcat instances over AJP.

In the past, Apache Httpd users were frustrated by the lack of distribution support for mod_jk, an AJP module for httpd which usually needed to be compiled from source. Today we can forget mod_jk and use the distribution-packaged and supported mod_proxy_ajp, which provides the same capabilities without the headache of compiling mod_jk against distribution-provided httpd software. Debian, Ubuntu, RHEL, and CentOS all provide mod_proxy_ajp packages with essential security and bug-fix updates through the vendor-supported apt and yum repositories. Converting a plain HTTP-based reverse proxy configuration to the more tightly integrated AJP is as simple as changing the ProxyPass entries in your Apache Httpd configuration file(s). Here's an example.

  • Old HTTP-based reverse proxy configuration sample
    • ProxyPass /context http://servername:8760/context
    • ProxyPassReverse /context http://servername:8760/context
  • New AJP-based reverse proxy configuration sample
    • ProxyPass /context ajp://servername:8761/context
    • ProxyPassReverse /context ajp://servername:8761/context
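
For multiple Tomcat back ends, mod_proxy_balancer can spread the AJP traffic across nodes. A minimal sketch with hypothetical host names and the default AJP port 8009 (each Tomcat needs an AJP/1.3 Connector enabled in server.xml, and sticky sessions rely on a matching jvmRoute in each Engine element):

# Hypothetical httpd fragment - host names, routes, and ports are placeholders
<Proxy balancer://appcluster>
    BalancerMember ajp://app1.example.com:8009 route=app1
    BalancerMember ajp://app2.example.com:8009 route=app2
</Proxy>
ProxyPass        /context balancer://appcluster/context stickysession=JSESSIONID
ProxyPassReverse /context balancer://appcluster/context

# Matching AJP Connector in each Tomcat server.xml
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />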

I understand there is an ongoing (eternal?) religious debate regarding the relative merits and disadvantages of HTTP vs AJP reverse proxy usage for Java web apps. There are two reasons I consider AJP to be a better option than plain HTTP reverse proxy solutions:

  1. When AJP is used, the Java web app receives the original headers from the front-end web server as if the request had been received directly from the original client. This is a HUGE ADVANTAGE! There is a little overhead in passing the original headers along with each reverse-proxied request between the front end and back end, but it IS TOTALLY WORTH IT: your app can process the real request without elaborate workarounds to make it respond with front-end URLs while untangling the confusion introduced by reverse-proxied HTTP request headers. If you have used an HTTP reverse proxy with Java web apps, you know what I'm talking about!
  2. AJP is proxy-aware; by this I mean that the front-end web server (load balancer / reverse proxy) and the back-end app server are both aware of, and actively participating in, a proxy (and possibly load-balancing) relationship. This enables passing the original headers, direct monitoring of app server status, and advanced native load-balancing capabilities (monitored addition or removal of app server nodes in the balanced set, true load-based connection assignment at the front end). These capabilities add a little overhead to the AJP protocol, but the added native load-balancing and reverse proxy capabilities are definitely worth it in my opinion.

Not everyone will agree with this opinion. AJP is not an Internet Standard but rather an Apache Tomcat protocol specification that enhances the interaction between a front-end web server and back-end Java app server(s). Even so, I feel that Java web app deployment teams and developers should strongly consider AJP as their reverse proxy and load-balancing solution of choice as long as Tomcat, Apache Httpd, and other web and app server products continue to support this valuable multi-tier enhancement for better web app deployment capabilities.


Tomcat Multiple Instances RHEL 7 CentOS 7

This is a follow-on to my earlier posts Apache Tomcat in RHEL 7 and RHEL 7 Administration Notes. It builds on the goal of using the system-packaged Tomcat and Java software in order to receive security and bug-fix updates as the distribution posts them to the supported yum repositories. In this case, we're leveraging the "Multiple Tomcat Instances" capability included with Tomcat 7 as packaged by Red Hat for the RHEL 7 and CentOS 7 Linux distributions. It appears that the RHEL team has done the work to ensure that the supported package does enable multiple Tomcats if you follow a combination of the official Apache Tomcat 7 "RUNNING" document (linked above) along with some of the comments provided with the RHEL Tomcat 7 configuration files. There is no single place where this documentation is combined in a practical way, so here are my brief notes to help tie it all together.

  • REPLACE all occurrences of “INSTANCE” below with your true instance name. An example instance name might be “geoserver.” (A condensed shell walk-through follows this list.)
  • Global configuration for ALL Tomcat Instances is provided in /etc/tomcat/tomcat.conf – any changes made here will be reflected across your entire “cat farm.”
  • Instance-Specific systemd (systemctl) Tomcat services are defined by creating new files like /usr/lib/systemd/system/tomcat@INSTANCE.service – copy the existing /usr/lib/systemd/system/tomcat@.service template file and alter for your instance. You can modify the instance Description, User and Group here.
  • Instance-Specific Environment Variables are defined by creating new files like /etc/sysconfig/tomcat@INSTANCE – copy existing /etc/sysconfig/tomcat. By default everything is commented out in this file. Use conf/server.xml changes listed below to set the ports used when a given instance of Tomcat runs.
    • To SET MAXIMUM MEMORY for specific tomcat instance, place a new line in /etc/sysconfig/tomcat@INSTANCE like the following for 8G max
    • CATALINA_OPTS="-Xmx8G"
  • Instance-Specific CATALINA_BASE directory is pre-defined to go under /var/lib/tomcats/INSTANCE – note that “Mimic” below means: use a similar permissions scheme and expect similar contents when the service is running.
    • Create a “conf” directory here
      • Should contain a COPY of contents of /etc/tomcat
      • CHANGE “server.xml” here to reflect instance-specific ports!!
    • Create a “logs” directory here
      • Mimic /var/log/tomcat
      • Add /etc/logrotate.d/tomcat@INSTANCE as needed
    • Create a “webapps” directory here
      • Mimic /var/lib/tomcat/webapps
      • Drop your app WAR files or app context directories here to deploy.
    • Create a “work” directory here
      • Mimic /var/cache/tomcat/work
    • Create a “temp” directory here
      • Mimic /var/cache/tomcat/temp
  • If SELinux is enabled, you may need to set the correct security context on the above instance-specific folders and files in addition to setting the instance-specific permissions based on the user & group the service runs as.
  • Manage with the standard systemctl commands like:
    • systemctl status tomcat@INSTANCE
    • systemctl enable tomcat@INSTANCE # enable start-on-boot
    • systemctl start tomcat@INSTANCE # start instance right now
    • systemctl stop tomcat@INSTANCE # stop instance right now
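
Putting the notes above together, here is a condensed walk-through for a hypothetical instance named “geoserver” – the user/group, memory limit, and ports are placeholders, and SELinux contexts may still need adjusting as noted above:

# as root on RHEL/CentOS 7 (hypothetical instance "geoserver")
cp /usr/lib/systemd/system/tomcat@.service /usr/lib/systemd/system/tomcat@geoserver.service
# edit the new unit file to adjust Description, User, and Group if needed
cp /etc/sysconfig/tomcat /etc/sysconfig/tomcat@geoserver
echo 'CATALINA_OPTS="-Xmx8G"' >> /etc/sysconfig/tomcat@geoserver

mkdir -p /var/lib/tomcats/geoserver/{conf,logs,webapps,work,temp}
cp -a /etc/tomcat/* /var/lib/tomcats/geoserver/conf/
# edit /var/lib/tomcats/geoserver/conf/server.xml to set instance-specific shutdown/HTTP/AJP ports
chown -R tomcat:tomcat /var/lib/tomcats/geoserver

systemctl daemon-reload
systemctl enable tomcat@geoserver
systemctl start tomcat@geoserver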

Good luck with your multi-instance Tomcat project on RHEL 7 or CentOS 7. By using a pattern similar to this, you should be able to take advantage of Red Hat provided security and bug-fix yum updates for your Java Servlet and JSP web apps. If you’re using the single default instance of Tomcat, see my earlier post linked at the top of this article.


SharePoint API Invoke-RestMethod PowerShell

Invoke-RestMethod was introduced in PowerShell 3.0, but unfortunately that release has a bug that prevents the user from setting the “Accept” header. The SharePoint 2013 REST API requires specific Accept header values to return common items like list data. A typical call to retrieve the contents of a list might look like:

  • $restData = Invoke-RestMethod -UseDefaultCredentials -Headers @{ "Accept" = "application/json;odata=verbose" } "http://server/site/_api/lists/getbytitle('listname')/items"
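
If the call succeeds, Invoke-RestMethod hands back the deserialized object, and with odata=verbose the items usually sit under a “d.results” wrapper. A quick sketch of walking the results – field names like Title and Modified are examples, so substitute your list’s columns:

# Hedged sketch: assumes the request above succeeded and returned verbose OData
foreach ($item in $restData.d.results) {
    "{0}`t{1}" -f $item.Title, $item.Modified
}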

If you’re running a version of PowerShell older than 4.0, this will fail to set the Accept header and will return an error. If you attempt to retrieve the data without the header, it will be incomplete (missing actual data). PowerShell 4.0 is included with Windows 8.1 and Server 2012 R2. You can download “Windows Management Framework (WMF) 4.0” for Windows 7 and Server 2008 R2 (Win7 or 2008 R2 = Windows NT 6.1). Newer versions of PowerShell should also contain the fix.

Another nasty bug relates to PowerShell’s handling of SharePoint JSON list data: the default list properties include DUPLICATE “Id” and “ID” fields. If you attempt to convert a JSON string containing these duplicates using ConvertFrom-Json, it fails with an error. One rough workaround is to replace the duplicate “Id” key with a non-duplicate name. A better workaround is to explicitly use the OData “$select” query option so SharePoint returns only specific columns (eliminating the duplicate without a crude string replace). The real trouble is the PowerShell JSON de-serializer: it creates object properties, and property identifiers in PowerShell are NOT case sensitive, so “Id” and “ID” collide. Here’s an example of the rough conversion of the duplicate key property (using $restData from above). Inspired by @JPBlanc’s answer to “ConvertFrom-Json … contains the duplicated keys” (stackoverflow.com).

  • $json = ($restData -creplace '"Id":', '"Idx":') | ConvertFrom-Json
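
If the deserialization itself is what fails, you need the raw JSON text to run the replace against; one way is Invoke-WebRequest and its Content property. A sketch using the same hypothetical endpoint:

# Fetch the raw JSON text, do the case-sensitive replace, then convert
$response = Invoke-WebRequest -UseDefaultCredentials -Headers @{ "Accept" = "application/json;odata=verbose" } "http://server/site/_api/lists/getbytitle('listname')/items"
$json = ($response.Content -creplace '"Id":', '"Idx":') | ConvertFrom-Json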

I won’t provide an ODATA $select example here, but there should be plenty available with a quick Google search. Another advantage to the $select approach is an improvement in performance – only sending the data you actually want to use.


Subversion Windows Deduplication Bug

Windows Server 2012 introduced a valuable Data Deduplication feature to reduce physical storage needs on volumes where multiple copies of identical data reside. Unfortunately, the way this feature is implemented is visible to client programs in the form of “reparse points”. Older uses of reparse points included a symbolic link feature similar to that found on Unix and Linux systems. Because symbolic links may require special treatment, shared libraries like the Apache Portable Runtime (APR) detect most reparse points as symbolic links even when they are really just Windows-deduplicated (storage-optimized) files. The popular file version control system “Subversion” relies on the APR library to handle files in a user’s working copy of project files. This APR behavior causes Subversion to treat deduplicated files on Windows as symbolic links, which receive special treatment as either text files containing a symlink path (Unix/Linux-created symlinks) or as unsupported entities (Windows-created symlinks). In neither case does this match the behavior needed for Subversion to ignore the reparse point and treat the file as a normal file (not a symlink).

Related bugs include these issues which remain open as of 4 Jan 2017. Additional discussion of Subversion symlink behavior is posted on the Subversion FAQ.

  • NTFS Reparse Points are treated as [Unix/Linux] APR_LNK, only correct for junction/dir link, APR Bug # 47630.
    • Resolution of the APR bug might result in an acceptable fix for Subversion to permit windows-deduplication of a working copy.
  • Add support for Windows symlinks (junction points), SVN-3570 (issues.apache.org)
    • While unrelated to the deduplication problem, this bug does mention the same “Symbolic links are not supported on this platform” error that users will see if deduplication reparse points are detected in the working copy during a commit / check-in.

Until a fix is available, users should avoid storing any Subversion working copy on a Windows-deduplicated volume. If files in a working copy become deduplicated, the resulting reparse points may corrupt properties in the repository: specifically, the “svn:special” property will be set to “*”, marking the file as a symbolic link. Other users checking out or updating these marked files may receive partial (corrupted) files in their working directory because the SVN client apparently assumes the file is a text representation of a symbolic link – files may be truncated. Removing the special property from the affected files in the repository and checking out a fresh copy should resolve the issue, but the corrupt working copy should be abandoned (delete it after backing up any changed / uncommitted files). To avoid checking in a corrupt file, I recommend modifying the file properties with svnmucc or another tool like the TortoiseSVN Repository Browser that can change properties directly against the repository URL without relying on a working copy. To view all file properties in a repository, use a command like “svn proplist …”

In order to resolve these problems, I have created PowerShell scripts to assist in performing the following maintenance tasks:

  • detect and remove deduplication on working copies (working copy location must also be added to exclusion list in windows deduplication settings).
  • detect and remove svn:special property from files directly from repository using svnmucc

Unfortunately I don’t have time to clean up and post sample code at the moment but I hope to return in the future to add the powershell sample scripts.


Redirect ASP.Net Default Page

Here’s a simple server-side ASP.Net default page redirect example using C# and a typical default.aspx file. This should work with most recent versions of IIS.

Key items listed for searchability: page directive, language attribute, script block, runat server, page_load, response.redirect.

Related article: Redirect Apache Tomcat Default Page.


Dell PERC / MegaRAID Disk Cache Policy

The Dell PERC (PowerEdge RAID Controller) cards provide a competitive server-based fault-tolerant storage solution. NOTE that Dell often quotes a server baseline configuration without any write cache on the RAID card – this makes the baseline config appear attractively cheap. DO NOT buy any system without a write cache built into the RAID controller – it will be listed as battery-backed, non-volatile, or flash-backed write cache (BBWC, NVWC, FBWC). It is also important to configure your server or storage unit WITH hot-swap drives AND redundant hot-swap power supplies. You want the RAID controller to indicate a disk failure with the LED lights on each disk caddy so you can easily identify the failed drive and swap it out while the storage volume is online. The same applies to power supplies (PSUs), which are a commonly failed component – the server needs to indicate the failed PSU with LED lights, and a technician must be able to swap out the failed unit while the system is running. Another benefit of redundant power supplies is the ability to swap out UPS units or migrate to a new PDU, etc.

Back to the RAID discussion – your ON-CONTROLLER write cache is crucial to the write performance of your fault-tolerant storage system. The OS can complete IO operations (IOPS) as soon as the write lands in the RAID controller cache, which lets the controller keep writing to the disks while the OS returns to other non-disk tasks. If the system loses power, the flash-backed or battery-backed cache preserves the unwritten data and completes the write when power is restored to the disks.

UNFORTUNATELY there is a huge problem with MANY of the PERC / MegaRAID implementations in the field that poses a BIG RISK of catastrophic DATA LOSS. The issue is the unclear description of the “Disk Write Cache” option within the Dell and other vendors’ RAID configuration utilities. Ironically, the “default” setting applied by many of these utilities is incorrect, introducing a risk of write corruption. The trouble is a combination of manufacturer defaults, confusing wording of the disk cache option, and a lack of adequate documentation.

The PHYSICAL DISK CACHE is designed for consumer NON-RAID computers with cheap, usually slow disks where the risk of data loss may impact only one person. In that case, an on-disk (VOLATILE) write cache speeds up writes to the disk but gives no way to save the cached data if power is lost. In fault-tolerant storage systems we cannot tolerate this data loss – a RAID controller must know that a write operation has actually reached the physical disk platter (not the disk cache) before considering the write complete. To make this guarantee, any DISK CACHE must be DISABLED. This ability to prevent data loss (corruption due to power failure) is an essential fault-tolerant storage capability for business information systems: when the operating system thinks a write operation has completed, the storage subsystem must not lose it on the way to the disk!

The big confusion is that the MegaRAID tools place this disk cache policy under “virtual disk configuration”, right next to the controller “write cache” policy, and the wording does not clarify whether the setting controls the physical disk cache or the controller-based virtual disk cache. To make matters worse, the MegaRAID configuration and status reports DO NOT indicate whether any given physical disk cache is currently enabled or disabled. In my opinion these are TERRIBLE UI DECISIONS on the part of the MegaRAID software utility development team! Users need to know whether they’re disabling the physical disk write cache or the crucial performance-improving controller-based write cache.

Moral of the story: DISABLE DISK WRITE-CACHE in the disk-cache policy of each virtual disk in your PERC / MegaRAID storage configuration. DO enable the controller-based write-back cache on each virtual disk, and make sure that your RAID card has a healthy battery-backed or flash-backed non-volatile write cache. If your RAID card has NO fault-tolerant write cache (usually listed as “No Battery” for legacy reasons), recycle it and replace it with a proper RAID card with a true controller write cache. Users of your system may not be able to tolerate the poor disk performance if your controller lacks a write cache – especially with the extra write delays of parity-based RAID (5, 6, 50, 60, etc.).
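
If you prefer a command line over the controller BIOS or OpenManage, the LSI/Avago storcli utility exposes both cache settings per virtual drive. A hedged sketch follows – controller and virtual-drive numbers are placeholders, and option names vary between storcli/MegaCli versions, so verify against your controller’s CLI documentation before running:

# Show current cache settings for all virtual drives on controller 0
storcli /c0/vall show all

# Disable the physical DISK write cache (the risky one) on every virtual drive
storcli /c0/vall set pdcache=off

# Enable controller write-back cache (requires a healthy battery/flash-backed cache module)
storcli /c0/vall set wrcache=wb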

Here are some references with more authoritative sources to back the claims I make here.

  • MegaRAID disk write cache policy (ibm.com) “Disk Cache Policy should always be ‘Disabled’ when creating a Virtual Drive attached a RAID controller. This is to prevent loss of data in case of a power failure.”
  • Configuring RAID for Optimal Performance (PDF, intel.com) “[p. 6] Disk Cache Policy determines whether the hard-drive write cache is enabled or disabled. When Disk Cache Policy is enabled, there is a risk of losing data in the hard drive cache if a power failure occurs. The data loss may be fatal and may require restoring the data from a backup device. It is critical to have protection against power failures.” – yes, there should be no option other than *disabled* IMHO

Resources from Dell tend to just add to the confusion – perhaps because they’re re-branding the AMI/LSI MegaRAID technology as “PERC” and do not quite understand it themselves? Here are a couple of examples of the confusion from Dell and their customers (including myself when I read their documentation and forums on this topic).

Hopefully this blog post will help clarify this confusing topic and result in more properly configured RAID storage solutions based on the AMI / LSI MegaRAID cards. A BIG THANKS to the Intel and IBM teams who posted helpful documentation with an answer to this common Dell PERC RAID question. Feel free to leave a comment if this helped you out or if you have something to add.


Mount CIFS Share on RHEL/CentOS

Some quick notes on mounting CIFS shares on RHEL and CentOS. Note that system-wide network filesystem mounts are typically specified in /etc/fstab and require supported kernel modules for the relevant vfs filesystem types. For SMB filesystems, the modern Linux kernel module is called “cifs.”

For RHEL 5.x / CentOS 5.x – here are some hints

  • Uninstall the default samba packages (3.0)
  • Install the samba3x packages (3.6) – we need the “samba3x-client” for cifs mount
  • You may need to specify the security type as a mount option – some bugs can prevent mount.cifs from negotiating compatible session authentication / security. An example is “sec=ntlmv2”.
  • In /etc/fstab, add the option “_netdev” to allow the filesystem to mount during boot. Other local filesystems are mounted *before* the network becomes available (/etc/rc.d/rc.sysinit). _netdev lets the system know to wait until *after* the network comes online (/etc/init.d/netfs) before attempting to mount your Windows file share (smb filesystem).
  • Review Microsoft KB # 957441 to see if you may need to enable “AllowLegacySrvCall” on your Windows file server. Linked below under references.
  • If you’re specifying login credentials, you may need to use the short forms: user, pass, dom, or cred. If you’re using a credential file, use the short forms of user, pass, dom there too. The documentation is confusing on this – it may not work properly without the *short* forms of these options in either cred file or fstab.

For RHEL 6.x / CentOS 6.x

  • Install cifs-utils (I think it’s still version 3.6 like we use on 5.x distro)
  • Negotiation of correct session auth & security may work better due to newer kernel modules – YMMV.
  • Same issue with the credentials options as 5.x distro – use the short forms!
  • The Windows server might need the AllowLegacySrvCall fix – try without it first, but if things continue to fail, apply the legacy setting to the registry on the Windows file server.

For RHEL 7.x / CentOS 7.x

  • I still need to work on this in the lab
  • Try after joining Active Directory Domain with “realm join …”
  • Try after installing sssd-libwbclient
  • Hoping to use sssd joined to domain and something like user=SERVERNAME$,sec=krb5,multiuser options to automatically use machine credentials for kerberos mount session. Desired functionality is each domain user receiving appropriate privileges based on multiuser mapping from sssd.
  • Documentation is difficult to find for this scenario. It’s not clear if the system will automatically allow use of machine domain credentials (krb5) on boot for the fstab mount.
  • How SSSD Integrates with an Active Directory Environment (redhat.com)
  • Samba mounting question (gmane.org linux.kernel.cifs forum)
  • Connecting Linux machine to windows AD and mounting remote … dirs (Martin’s Chronicles blog)

References:

  • Microsoft KB # 957441 – describes the “AllowLegacySrvCall” registry setting for Windows file servers (support.microsoft.com)


Firefox Trusted Certificate Authorities (Windows Crypto API)

Windows versions of the Firefox browser maintain an independent certificate trust store and do not integrate with the native system certificate trust infrastructure. This limitation applies to all Firefox versions up to and including 48.

The good news in 2016: software developers at Mozilla are putting the finishing touches on Firefox 49, which should resolve the issue by integrating with the native Windows certificate trust store. This will help Firefox compete with the Internet Explorer, Edge, and Chrome browsers, which already support the native Windows certificate trust stores. See the following Mozilla bug tracker entry for details:

Organizations using Windows / Active Directory / Group Policy will benefit from this new functionality after migrating to Firefox 49 (once a stable release is available to the public): Firefox will finally be able to trust the same set of certificates as other native Windows programs. I’m hoping this update will also allow the use of user identity certificates from the Windows certificate store. Version 49 is currently scheduled for public availability on 2016-09-13.


Perl CPAN Modules Offline Windows Install

I’ve recently been supporting some Perl users who need to install Perl CPAN modules (like the “Tk” GUI toolkit) on Windows systems where the perl cpan command cannot connect to any CPAN mirror servers (firewall or other connectivity issues).

In this case, it is still possible to install CPAN modules. Like Perl itself, “there is more than one way to do it” – this is one such way ;-).

  • Assuming the Windows system already has the “Strawberry Perl” distribution (it contains gcc, perl, ptar, dmake, and other dependencies needed to build and use CPAN modules).
  • Download the *.tar.gz package from CPAN and transfer to your target system (may require sneaker-net like burning a cd/dvd if lacking Internet connectivity).
  • Extract the bundle using ptar -zxf PKG-NAME.tar.gz or similar command within Strawberry Perl
  • cd into the extracted package directory
  • Build the package using standard commands – usually described in INSTALL file. Example typical for “Tk” module follows.
    • perl Makefile.PL
    • dmake
    • dmake test
    • dmake install
      • For the Tk module, an optional demo program will also be available after a successful install via the widget command.
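
Put together, a typical session for the Tk module might look like this (the version number is only illustrative):

# From a Strawberry Perl command prompt, in the folder holding the downloaded archive
ptar -zxf Tk-804.033.tar.gz
cd Tk-804.033
perl Makefile.PL
dmake
dmake test
dmake install
widget    # optional demo application installed with the Tk module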

Good luck with your offline Perl CPAN module build and install tasks on Windows. Hopefully this quick Strawberry Perl note proves useful to those without direct connectivity to CPAN. Another possible solution is to provide a “Mini-CPAN” for local module distribution, as recommended by @mfontani on superuser.com: How do I install … [perl] module (.tar.gz file) in Windows?


Convert Windows IIS SSL Certificate to Tomcat Java Format

This is a follow-on to my earlier post Convert Apache Httpd SSL Certificate for Tomcat. This time around we’re converting a GoDaddy SSL server certificate that has already been issued and is currently in use with the Windows IIS web server. The most important thing about this conversion is to ensure that the certificate key-pair entry in the resulting keystore file for Tomcat has the appropriate intermediate CA trust chain stored under the same entry. Without this trust chain in the right place, Tomcat will fail to send the intermediate CA certs to SSL/TLS clients during the establishment of a secure session. For many clients the absence of the intermediate CA will not be a problem because the client already has the same intermediate CA on record in its local trust store. Unfortunately some popular mobile clients – most notably iOS (iPhone / iPad) – have a stripped-down trust list that leaves out most if not all intermediate CA certs, hence the requirement that the server (Tomcat) present the appropriate intermediate certs along with the server cert to avoid TLS/SSL trust errors when the client connects. These notes attempt to describe a repeatable process to reliably convert the Windows IIS server cert to a Tomcat-compatible keystore. I will also include example commands to verify that the Tomcat keystore functions as required for clients that depend on intermediate certs presented in a trust chain by the server (Tomcat).

The first step is to export the server certificate key-pair, with its attached trust chain, through the Windows Certificates MMC snap-in.

  • Windows Key + R (run), then type “mmc” in the “Open:” box and click “OK”
  • Ctrl + M (add-remove snap-in)
  • Double-click “Certificates” in the “Available snap-ins” list
  • Select “Computer Account” radio button, then “Next”
  • Select “Local computer” radio button, then “Finish”
  • Click “OK” to return to the MMC window with the newly-added “Certificates” snap-in
  • Expand “Certificates” then “Personal” nodes
  • Right-click the server certificate you want to convert, then “All Tasks” – “Export”
  • Step through the wizard, Select “Yes, export the private key” (MANDATORY TO EXPORT KEY-PAIR)
  • Select “Include all certificates in the certification path” (MANDATORY TO EXPORT TRUST CHAIN)
    • NEVER EVER choose the delete-private-key option, THAT WOULD DESTROY THE CERTIFICATE IN Windows/IIS
  • Select “Export all extended properties”
  • Type the SAME PASSWORD you plan to use for your Tomcat Keystore (helps avoid keystore/private-key password mismatch problems with Tomcat)
  • BEFORE clicking “Finish” on the Cert Export Wizard, REVIEW THE SETTINGS SELECTED (export keys = yes, include all certificates in the certification path = yes). MAKE EXTRA SURE that you’re not accidentally deleting the private key from the Windows Certificate Store.

After you have the cert saved as a *.pfx (PKCS12) file, the Java keytool can handle the rest of the conversion process.

# USING PowerShell to run example commands (provides Select-String and other useful utilities)

keytool -list -v -keystore YOUR-CERT.pfx -storetype PKCS12 | select-string "Keystore |Alias |Entry |chain |Owner: |Issuer: |\]:"

# REVIEW OUTPUT, find "Alias name" for the cert you exported
# DOWNLOAD A COPY of root/intermediate certs corresponding to your server cert from https://certs.godaddy.com/repository/
# FILES are "gdroot-g2.crt" and "gdig2.crt" for GoDaddy G2 (generation 2) certs

keytool -import -alias gdroot-g2 -keystore TOMCAT-KEYSTORE.jks -trustcacerts -file gdroot-g2.crt
keytool -import -alias gdig2 -keystore TOMCAT-KEYSTORE.jks -trustcacerts -file gdig2.crt

# NOTE these root and intermediate certs are NOT the mandatory cert chain. They will be available to Tomcat/Java if needed.
keytool -importkeystore -srckeystore YOUR-CERT.pfx -srcstoretype PKCS12 -destkeystore TOMCAT-KEYSTORE.jks -srcalias ALIAS-FROM-KEYTOOL-LIST -destalias FRIENDLY-CERT-NAME
# This is the most important step for the conversion. The cert with matching ALIAS from YOUR-CERT.pfx is imported into your JKS file
# With the same command we're also renaming the random alias from Windows to a meaningful short alias of your choice FRIENDLY-CERT-NAME
# Verify the contents of your new Tomcat-compatible JKS file using a command like:

keytool -list -v -keystore TOMCAT-KEYSTORE.jks | select-string "Keystore |Alias |Entry |chain |Owner: |Issuer: |\]:"

# For GoDaddy G2, your "PrivateKeyEntry" should have a "chain length" of 3: Certs [1] server, [2] intermediate, and [3] root

After creating your new JKS file, you must configure Tomcat to use it (server.xml) and then RE-START the Tomcat service; a sample Connector entry follows. You can verify that the chain presented by Tomcat is valid by connecting to your site with a browser that requires a server-provided trust chain (most browsers on iOS mobile devices). Alternately, you can use an OpenSSL command to view the trust chain presented by the server – an example command follows the Connector sample.
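
A rough sketch of a JKS-based HTTPS Connector in server.xml – port, paths, password, and alias are placeholders, and attribute names can vary between Tomcat versions and connector types, so confirm against the SSL/TLS HOW-TO for your Tomcat release:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/TOMCAT-KEYSTORE.jks"
           keystorePass="YOUR-KEYSTORE-PASSWORD"
           keyAlias="FRIENDLY-CERT-NAME" />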

# bash shell is assumed for this example. Windows users can obtain bash with a Cygwin environment.
# Substitute your server's fully-qualified name and Tomcat SSL/TLS port number

echo "Q" | openssl s_client -connect YOUR-TOMCAT.YOUR-DOMAIN.COM:8443 | egrep 'chain|s:|i:|Verify return'

# SAMPLE OUTPUT if your new JKS file passes the trust-chain compatibility test
Certificate chain
 0 s:/OU=Domain Control Validated/CN=YOUR-TOMCAT.YOUR-DOMAIN.COM
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
 2 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
    Verify return code: 0 (ok)
# NOTE that the chain received from Tomcat is displayed along with the chain-verification status "0 (ok)" means success

Hopefully these brief notes will help one or two people converting and testing IIS/Windows server certificates for use with the Apache Tomcat / Java JKS format keystore. Good luck with your IIS to Tomcat cert projects!
