Redirect ASP.Net Default Page

Here’s a simple server-side ASP.Net default page redirect example using C# and a typical default.aspx file. This should work with most recent versions of IIS.

Key items listed for searchability: page directive, language attribute, script block, runat server, page_load, response.redirect.
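A minimal default.aspx along these lines does the job (the redirect target path is a placeholder – substitute your own URL):

```aspx
<%@ Page Language="C#" %>
<script runat="server">
    // Page_Load fires on every request to the default page.
    // Response.Redirect issues an HTTP 302 telling the client where to go.
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Redirect("/app/");  // placeholder target – replace with your destination
    }
</script>
```

Drop this file in the IIS site root as default.aspx and any request to the site root will be redirected.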

Related article: Redirect Apache Tomcat Default Page.

Posted in System Administration

Dell PERC / MegaRAID Disk Cache Policy

The Dell PERC (PowerEdge RAID Controller) cards provide a competitive server-based fault-tolerant storage solution. NOTE that Dell often quotes a server baseline configuration without any write cache on the RAID card – this makes the baseline config appear to be an attractive low price. DO NOT buy any system without a write cache built into the RAID controller – this will be listed as battery-backed, non-volatile, or flash-backed write cache (BBWC, NVWC, FBWC). It is also important to configure your server or storage unit WITH hot-swap drives AND redundant hot-swap power supplies. You want the RAID controller to be able to indicate a disk failure with the LED lights on each disk caddy – that way you can easily identify the failed drive and swap it out while the storage volume is online. The same applies to power supplies (PSUs), which are a commonly failed component – the server needs to be able to indicate the failed PSU with LED lights, and a technician must be able to swap out the failed unit while the system is running. Another benefit of redundant power supplies: the ability to swap out UPS units, migrate to a new PDU, etc.

Back to the RAID discussion – your ON-CONTROLLER write cache is crucial to the write performance of your fault-tolerant storage system. The OS can consider IO operations (IOPS) complete once the write has reached the RAID controller cache – the controller continues writing to the disks while the OS returns to other non-disk tasks. If the system loses power, the flash-backed or battery-backed cache preserves the unwritten data and completes the write when power is restored to the disks.

UNFORTUNATELY there is a huge problem with MANY of the PERC / MegaRAID implementations in the field that poses a BIG RISK of catastrophic DATA LOSS. The issue is the unclear description of the “Disk Write Cache” option within the Dell and other vendors’ RAID configuration utilities. Ironically, the “default” setting applied by many of these utilities is incorrect, introducing a risk of write corruption. The trouble is a combination of manufacturer defaults, confusing wording of the disk cache option, and a lack of adequate documentation.

The PHYSICAL DISK CACHE is designed for use in consumer NON-RAID computers with cheap, usually slow disks, where the risk of data loss may impact only one person. In that case, an on-disk write cache (VOLATILE) speeds up writes to the disk but cannot preserve the cached data if power is lost. In fault-tolerant storage systems, we cannot tolerate this data loss – a RAID controller must know that a write operation has actually reached the physical disk platter (not the disk cache) before considering the write operation complete. In order to make this guarantee, any DISK CACHE must be DISABLED. This ability to prevent data loss (corruption due to power failure) is an important fault-tolerant storage capability for business information systems. When the Operating System thinks a write operation has been completed – the storage subsystem must not lose it on the way to the disk!

The big confusion is that the MegaRAID tools configure this disk cache policy under “virtual disk configuration” next to the controller “write cache” policy and the wording does not clarify whether the disk-cache is a physical-disk setting or controller-based virtual disk cache setting. To make matters worse, the MegaRAID configuration and status reports DO NOT indicate whether any given physical disk cache is currently enabled or disabled. In my opinion these are TERRIBLE UI DECISIONS on the part of the MegaRAID software utility development team! Users need to know if they’re disabling physical disk write-cache, or if they’re disabling crucial performance-improving controller-based write cache.

Moral of the story: DISABLE DISK WRITE-CACHE in your disk-cache policy under each virtual disk in your PERC / MegaRAID storage configuration. DO enable your controller-based Write-Back cache on each virtual disk, and make sure that your RAID card has a healthy battery-backed or flash-backed non-volatile write cache. If your RAID card has NO fault-tolerant write cache (usually listed as NO Battery for legacy reasons) – recycle it and replace it with a proper RAID card with true controller write cache. Users of your system may not be able to tolerate the poor disk performance if your controller lacks a write cache – especially with the extra write delays of parity-based RAID (5, 6, 50, 60, etc.).
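For reference, here is how those settings look from the MegaCLI command-line utility – a sketch only: the binary name (MegaCli vs. MegaCli64) and exact option spelling vary by platform and MegaCLI version, so verify against your controller documentation before running anything.

```shell
# Show the current physical disk cache setting for all virtual disks
MegaCli64 -LDGetProp -DskCache -LAll -aAll

# DISABLE the physical disk write cache on all virtual disks (the safe setting)
MegaCli64 -LDSetProp -DisDskCache -LAll -aAll

# ENABLE the controller-based write-back cache on all virtual disks
MegaCli64 -LDSetProp WB -LAll -aAll

# Check the health of the battery/BBU backing the controller cache
MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll
```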

Here are some references with more authoritative sources to back the claims I make here.

  • MegaRAID disk write cache policy – “Disk Cache Policy should always be ‘Disabled’ when creating a Virtual Drive attached to a RAID controller. This is to prevent loss of data in case of a power failure.”
  • Configuring RAID for Optimal Performance (PDF) – “[p. 6] Disk Cache Policy determines whether the hard-drive write cache is enabled or disabled. When Disk Cache Policy is enabled, there is a risk of losing data in the hard drive cache if a power failure occurs. The data loss may be fatal and may require restoring the data from a backup device. It is critical to have protection against power failures.” – yes, there should be no option other than *disabled* IMHO

Resources from Dell tend to just add to the confusion – perhaps this is because they’re just re-branding the AMI/LSI MegaRAID technology as “PERC” and do not quite understand it themselves? Here are a couple examples of the confusion from Dell and their customers (including myself when I read their documentation and forums on this topic).

Hopefully this blog post will help clarify this confusing topic and result in more properly configured RAID storage solutions based on the AMI / LSI MegaRAID cards. A BIG THANKS to the Intel and IBM teams who posted helpful documentation with an answer to this common Dell PERC RAID question. Feel free to leave a comment if this helped you out or if you have something to add.

Posted in System Administration

Mount CIFS Share on RHEL/CentOS

Some quick notes regarding mounting CIFS shares on RHEL and CentOS. Note that system-wide network filesystem mounts are typically specified in /etc/fstab and require supported kernel modules for compatible vfs filesystem types. In the case of SMB filesystems, the modern Linux kernel module is referred to as “cifs.”

For RHEL 5.x / CentOS 5.x – here are some hints

  • Uninstall the default samba packages (3.0)
  • Install the samba3x packages (3.6) – we need the “samba3x-client” for cifs mount
  • You may need to specify the security type as a mount option – some bugs can prevent mount.cifs from negotiating compatible session authentication / security. An example would be “sec=ntlmv2”.
  • In /etc/fstab, add the option “_netdev” to allow the filesystem to mount during boot. Other local filesystems are mounted *before* the network becomes available (/etc/rc.d/rc.sysinit). _netdev lets the system know to wait until *after* the network comes online (/etc/init.d/netfs) before attempting to mount your Windows file share (smb filesystem).
  • Review Microsoft KB # 957441 to see if you may need to enable “AllowLegacySrvCall” on your Windows file server. Linked below under references.
  • If you’re specifying login credentials, you may need to use the short forms: user, pass, dom, or cred. If you’re using a credential file, use the short forms of user, pass, dom there too. The documentation is confusing on this – it may not work properly without the *short* forms of these options in either cred file or fstab.
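Putting the 5.x hints together, an /etc/fstab entry and matching credentials file might look like this (server, share, mount point, and account names are all placeholders):

```
# /etc/fstab entry (one line) -- _netdev defers the mount until the network is up
//winserver/share  /mnt/share  cifs  _netdev,sec=ntlmv2,cred=/etc/cifs-share.cred  0 0

# /etc/cifs-share.cred (protect with chmod 600) -- note the SHORT option names
user=svcaccount
pass=SuperSecret
dom=EXAMPLE
```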

For RHEL 6.x / CentOS 6.x

  • Install cifs-utils (I think it’s still version 3.6 like we use on 5.x distro)
  • Negotiation of correct session auth & security may work better due to newer kernel modules – YMMV.
  • Same issue with the credentials options as 5.x distro – use the short forms!
  • Windows server might need AllowLegacySrvCall fix – try without it first but if things continue to fail apply legacy setting to registry on Win file server.

For RHEL 7.x / CentOS 7.x

  • I still need to work on this in the lab
  • Try after joining Active Directory Domain with “realm join …”
  • Try after installing sssd-libwbclient
  • Hoping to use sssd joined to domain and something like user=SERVERNAME$,sec=krb5,multiuser options to automatically use machine credentials for kerberos mount session. Desired functionality is each domain user receiving appropriate privileges based on multiuser mapping from sssd.
  • Documentation is difficult to find for this scenario. It’s not clear if the system will automatically allow use of machine domain credentials (krb5) on boot for the fstab mount.
  • How SSSD Integrates with an Active Directory Environment
  • Samba mounting question (linux.kernel.cifs forum)
  • Connecting Linux machine to windows AD and mounting remote … dirs (Martin’s Chronicles blog)
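For what it’s worth, the fstab line I expect to try in the lab would look something like the following – UNTESTED, and every name here is a placeholder matching the option list above:

```
# /etc/fstab -- UNTESTED sketch for RHEL 7 with sssd/AD machine credentials
//winserver.example.com/share  /mnt/share  cifs  sec=krb5,multiuser,user=SERVERNAME$,_netdev  0 0
```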


Posted in Linux, System Administration

Firefox Trusted Certificate Authorities (Windows Crypto API)

Windows versions of the Firefox browser have an independent certificate trust store and do not integrate with the native system cert trust infrastructure. This limitation applies to all Firefox versions up to and including 48.

Good news: in 2016, developers at Mozilla are putting the finishing touches on Firefox 49, which should resolve this issue by integrating with the native Windows certificate trust store. This will help Firefox compete with the Internet Explorer, Edge, and Chrome browsers, which already support the native Windows cert trust stores. See the Mozilla bug tracker entry for details.

Organizations using Windows / Active Directory / Group Policy will automatically benefit from this new functionality after migrating to Firefox 49 (once a stable release is available to the public). Firefox will finally be able to trust the same set of certificates as other native Windows programs. I’m hoping this update will also allow the use of user identity certificates from the Windows certificate store. Version 49 is currently scheduled for public availability on 2016-09-13.

Posted in System Administration

Perl CPAN Modules Offline Windows Install

UPDATE 16 Mar 2018, see comment by @Wes-Peacock. Recent versions of Strawberry Perl have replaced dmake with gmake. I’m making the change below to reflect gmake.

I’ve recently been supporting some Perl users who need to install CPAN modules (like the “Tk” GUI Tool Kit) on Windows systems where the perl cpan command cannot connect to any CPAN mirror servers (firewall or other connectivity issue).

In this case, it is still possible to install CPAN modules. Like Perl itself, “there is more than one way to do it” – this is one such way ;-).

  • Assuming the Windows system already has “Strawberry Perl” distribution (contains gcc, perl, ptar, gmake, and other dependencies to build and use CPAN modules).
  • Download the *.tar.gz package from CPAN and transfer to your target system (may require sneaker-net like burning a cd/dvd if lacking Internet connectivity).
  • Extract the bundle using ptar -zxf PKG-NAME.tar.gz or similar command within Strawberry Perl
  • cd into the extracted package directory
  • Build the package using standard commands – usually described in INSTALL file. Example typical for “Tk” module follows.
  • perl Makefile.PL
  • gmake
  • gmake test
  • gmake install
  • An optional test program will also be available after successful build with widget command.

Good luck with your offline Perl CPAN module Windows build and install tasks. Hopefully this quick Strawberry Perl note will prove useful to those without direct connectivity to CPAN. Another possible solution may be to provide a “Mini-CPAN” for local module distribution as recommended by @mfontani on How do I install … [perl] module (.tar.gz file) in Windows?

Posted in System Administration

Convert Windows IIS SSL Certificate to Tomcat Java Format

This is a follow-on to my earlier post Convert Apache Httpd SSL Certificate for Tomcat. This time around we’re converting a GoDaddy SSL server certificate that has already been issued and is currently in use with a Windows IIS web server. The most important thing about this conversion is to ensure that the certificate key-pair entry in the resulting keystore file for Tomcat has the appropriate intermediate CA trust-chain stored under the same entry. Without this trust chain in the right place, Tomcat will fail to send the intermediate CA certs to SSL/TLS clients during the establishment of a secure session. For many clients the absence of the intermediate CA will not be a problem because the client already has the same intermediate CA on record in its local trust store. Unfortunately, some popular mobile clients – most notably iOS (iPhone / iPad) – have a stripped-down trust list that leaves out most if not all intermediate CA certs, thus the requirement that the server (Tomcat) present the appropriate intermediate certs along with the server cert to avoid TLS/SSL trust errors when the client connects. These notes attempt to describe a repeatable process to reliably convert the Windows IIS server cert to a Tomcat-compatible keystore. I will also include example commands to verify that the Tomcat keystore functions as required for clients that depend on intermediate certs presented in a trust-chain by the server (Tomcat).

The first step is through the Windows Certificates MMC snap-in, exporting the server certificate/key-pair with attached trust chain.

  • Windows Key + R (run), then type “mmc” in the “Open:” box and click “OK”
  • Ctrl + M (add-remove snap-in)
  • Double-click “Certificates” in the “Available snap-ins” list
  • Select “Computer Account” radio button, then “Next”
  • Select “Local computer” radio button, then “Finish”
  • Click “OK” to return to the MMC window with the newly-added “Certificates” snap-in
  • Expand “Certificates” then “Personal” nodes
  • Right-click the server certificate you want to convert, then “All Tasks” – “Export”
  • Step through the wizard and select “Yes, export the private key” (MANDATORY TO EXPORT KEY-PAIR)
  • Select “Include all certificates in the certification path” (MANDATORY TO EXPORT TRUST CHAIN)
    • NEVER EVER choose the delete-private-key option, THAT WOULD DESTROY THE CERTIFICATE IN Windows/IIS
  • Select “Export all extended properties”
  • Type the SAME PASSWORD you plan to use for your Tomcat Keystore (helps avoid keystore/private-key password mismatch problems with Tomcat)
  • BEFORE clicking “Finish” on the Cert Export Wizard, REVIEW THE SETTINGS SELECTED (export keys = yes, include all certificates in the certification path = yes). MAKE EXTRA SURE that you’re not accidentally deleting the private key from the Windows Certificate Store.

After you have the cert saved as a *.pfx (PKCS12) file, the Java keytool can handle the rest of the conversion process.

# USING PowerShell to run example commands (provides Select-String and other useful utilities)

keytool -list -v -keystore YOUR-CERT.pfx -storetype PKCS12 | select-string "Keystore |Alias |Entry |chain |Owner: |Issuer: |\]:"

# REVIEW OUTPUT, find "Alias name" for the cert you exported
# DOWNLOAD A COPY of root/intermediate certs corresponding to your server cert from
# FILES are "gdroot-g2.crt" and "gdig2.crt" for GoDaddy G2 (generation 2) certs

keytool -import -alias gdroot-g2 -keystore TOMCAT-KEYSTORE.jks -trustcacerts -file gdroot-g2.crt
keytool -import -alias gdig2 -keystore TOMCAT-KEYSTORE.jks -trustcacerts -file gdig2.crt

# NOTE these root and intermediate certs are NOT the mandatory cert chain. They will be available to Tomcat/Java if needed.
keytool -importkeystore -srckeystore YOUR-CERT.pfx -srcstoretype PKCS12 -destkeystore TOMCAT-KEYSTORE.jks -srcalias ALIAS-FROM-KEYTOOL-LIST -destalias FRIENDLY-CERT-NAME
# This is the most important step for the conversion. The cert with matching ALIAS from YOUR-CERT.pfx is imported into your JKS file
# With the same command we're also renaming the random alias from Windows to a meaningful short alias of your choice FRIENDLY-CERT-NAME
# Verify the contents of your new Tomcat-compatible JKS file using a command like:

keytool -list -v -keystore TOMCAT-KEYSTORE.jks | select-string "Keystore |Alias |Entry |chain |Owner: |Issuer: |\]:"

# For GoDaddy G2, your "PrivateKeyEntry" should have a "chain length" of 3: Certs [1] server, [2] intermediate, and [3] root

After creating your new JKS file, you must configure Tomcat to use it (server.xml) and then RE-START the Tomcat service. You can verify that the chain presented by Tomcat is valid by connecting to your site with a browser that requires a server-provided trust-chain (most browsers on iOS mobile devices). Alternately, you can use an OpenSSL command to view the trust chain presented by the server – example command follows.

# bash shell is assumed for this example. Windows users can obtain bash with a Cygwin environment.
# Substitute your server fully-qualified name and Tomcat SSL/TLS port number

echo "Q" | openssl s_client -connect YOUR-TOMCAT.YOUR-DOMAIN.COM:8443 | egrep 'chain|s:|i:|Verify return'

# SAMPLE OUTPUT if your new JKS file passes the trust-chain compatibility test
Certificate chain
 0 s:/OU=Domain Control Validated/CN=YOUR-TOMCAT.YOUR-DOMAIN.COM
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
 2 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
    Verify return code: 0 (ok)
# NOTE that the chain received from Tomcat is displayed along with the chain-verification status "0 (ok)" means success

Hopefully these brief notes will help one or two people converting and testing IIS/Windows server certificates for use with the Apache Tomcat / Java JKS format keystore. Good luck with your IIS to Tomcat cert projects!

Posted in System Administration

Grep for Windows PowerShell

I recently needed to search a file in Windows for matching lines of text and did some looking around for a built-in tool to accomplish the task. I had some unique requirements that led me to a useful solution with the PowerShell Select-String command (simple grep-like tool). Here are some of the requirements I was looking for:

  • Built-in to Windows, no software to install
  • Capable of searching UCS-2 (UTF-16) multi-byte character unicode text files.
  • Search for alternative patterns (vertical bar “|” operator “OR”). Find lines that match one of a set of multiple patterns (alternation).

The specific case was for reviewing output from the Java SE (JRE/JDK) “keytool -list -v” command – verifying the contents of Java Key Stores (JKS). Here’s a sample to demonstrate

$pattern = "Keystore |Owner: |Alias name: |Entry type: |Issuer: |chain length: |Certificate\[|\*\*\*"
select-string $pattern KEYTOOL_LIST.txt | select line
# sample output for a GoDaddy certificate with trust chain used by Apache Tomcat keystore
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 3 entries
Alias name: gdroot-g2
Entry type: trustedCertEntry
Owner: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Issuer: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Alias name: TOMCAT-SRV
Entry type: PrivateKeyEntry
Certificate chain length: 3
Owner: CN=YOUR-TOMCAT.YOUR-DOMAIN.COM, OU=Domain Control Validated
Issuer: CN=Go Daddy Secure Certificate Authority - G2, OU=http://certs.godaddy.com/repository/, O="GoDaddy.com, Inc....
Owner: CN=Go Daddy Secure Certificate Authority - G2, OU=http://certs.godaddy.com/repository/, O="GoDaddy.com, Inc."...
Issuer: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Owner: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Issuer: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Alias name: gdig2
Entry type: trustedCertEntry
Owner: CN=Go Daddy Secure Certificate Authority - G2, OU=http://certs.godaddy.com/repository/, O="GoDaddy.com, Inc."...
Issuer: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US

In this case, we were able to summarize the long complex keytool output to show only critical lines of interest – now we have validated that the server certificate/key-pair has the associated trust chain attached to the appropriate server cert-key entry. Tomcat requires this in order to pass the trust chain information to the client – iOS browsers are notorious for generating certificate errors when the web server fails to send the intermediate CA trust chain as part of establishing an SSL/TLS secure connection.

You may also be interested in my earlier post from October 2015 PowerShell for OpenSSL CA Issued Cert Status. In that post, Select-String was used as part of a simple script to view a list of private CA-issued certificates.

Posted in System Administration

Java SE 7 vs 8 TLS SSL Cipher Support

Recent news from Oracle indicates that free public support (bug fixes and security updates) is now (as of May 2016) only provided for Java SE 8.x. This is a smart move for Oracle for many reasons – one being an attempt to force users and developers to migrate away from old, vulnerable versions of Java (the Java plugin is a top malware target). Unfortunately, Oracle still does not provide a free capability to automatically update Java client installs to the latest security-fix release.

Another huge reason to migrate ALL servers and client systems to the latest Java 8.x release – TLS and SSL cipher support compatibility. Recent industry migrations to new cipher suites and newer TLS versions are increasing security for Internet communications, but this is causing difficult-to-troubleshoot compatibility issues for Java 7.x and older which don’t ship with default support for the newer server ciphers and may leave newer TLS versions disabled.
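To see exactly what a particular Java runtime enables by default, you can query the standard javax.net.ssl API – a quick sketch follows (output differs between Java 7 and Java 8; run it with each JRE you have installed to compare):

```java
import javax.net.ssl.SSLContext;

public class TlsDefaults {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();

        // Protocol versions enabled by default for client connections on this JRE
        for (String p : ctx.getDefaultSSLParameters().getProtocols()) {
            System.out.println("enabled: " + p);
        }

        // Everything the JSSE provider supports (may include versions disabled by default)
        for (String p : ctx.getSupportedSSLParameters().getProtocols()) {
            System.out.println("supported: " + p);
        }
    }
}
```

On Java 8 you should see TLSv1.2 enabled by default; on Java 7, TLSv1.1/TLSv1.2 are supported but not enabled for clients – which is exactly the compatibility trap described above.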

Moral of the story … upgrade ALL your server and client Java installs to Java SE 8.x – the sooner the better. If you’re having trouble connecting with a Java client or server program, double-check which version of Java is being used – all clients and servers should be running 8.x for your best chances of security compliance and cipher/TLS compatibility.

For some technical details on cipher and TLS default support with each Java SE version along with troubleshooting tips, see the following Oracle article: Diagnosing TLS, SSL, and HTTPS. Good luck with your Java SE TLS/SSL tasks!

Posted in System Administration

Windows Server Update Services Approvals

I’m writing to communicate a STRONG OPINION I have regarding a COMMON ERROR I often see with companies using Windows Server Update Services (WSUS). This is based on the version of WSUS provided with Server 2012 R2, but similar principles should apply to WSUS on other Windows Server releases.

With a fresh install of WSUS, a Default Approval Rule will be present that will AUTO-APPROVE ONLY CRITICAL and SECURITY updates for ALL COMPUTERS in the organization. I recommend leaving this approval rule enabled (checked) so that the MOST IMPORTANT UPDATES are auto-approved and installed on the company computers which report to your WSUS server. With this conservative and automatic approval setting, you help ensure that all computers receive crucial bug-fix and security-vulnerability patches. Occasionally an update approved at this level may cause a system problem – if this risk is a concern, you may want to schedule different groups of WSUS clients to receive updates on different schedules; that way some systems stay operational while you disapprove the troublesome updates, determine what went wrong, and fix the affected clients.

The trouble I see is that the WSUS management tool reports “NEEDED” updates, which really just means: an update published by Microsoft that applies to software installed on one of your clients. These updates are not actually NEEDED unless you decide to install them. Since the default rule will only approve CRITICAL and SECURITY updates, there will be a large list of Needed updates reported in your WSUS server console. When admins log into WSUS, they naturally want to resolve the problem and approve the Needed updates. This is WRONG – DO NOT APPROVE these “Needed” updates. Updates that are not classified as Critical or Security have a MUCH HIGHER RISK of breaking your client systems, as they are not providing a critical bug fix or security-vulnerability patch. I recommend that WSUS administrators leave these less urgent update categories unapproved unless a specific update is released that addresses a known issue with software in use on your Update Services client PCs.

I can’t stress enough that this more conservative WSUS update approval process will HELP YOU AVOID APPLYING A “BAD UPDATE” to your clients. In WSUS, the term “Needed” might be better interpreted as: a current-release update that applies to software installed on your WSUS client system(s) (updates of any classification).

I’m sure there are many opinions about this topic, as it is a matter of administrative preference or policy for each WSUS network. This blog post is just a quick note of WARNING – if you apply all “WSUS Needed” updates, you are increasing the risk of applying some “bad updates” to your client systems.

Posted in System Administration

FlexNet Licensing flexlm

FlexNet Licensing (flexlm) is an extremely popular commercial software license management system. It is used for network licensing with Matlab, ENVI/IDL, RemoteView, and a ton of other popular software products. This is a short note with tips for success hosting a flexlm license server on Windows.

  • Follow the software manufacturer instructions to install the supported version of the flexlm license manager for your software product. You should be installing a license server distributed by the same company that makes the client software which uses the hosted licenses. You would NEVER want to attempt to host licenses for different products/manufacturers within the same flexlm instance. For the best chances of success, I would recommend installing each different product’s supported flexlm on a separate operating system instance (different physical or virtual machines).
  • Each instance of flexlm provided by a software company will listen on a pre-configured TCP port. This port will be visible from the lmtools utility when you query the license server status.
  • Your software manufacturer is required to provide a “Vendor Daemon” that manages license features for the flexlm server. This vendor daemon will often require direct TCP connections from the client software. In order to assign a predictable TCP port for the vendor daemon, add PORT=#### (substitute a real port number) to the end of the VENDOR line in each license file that flexlm is hosting. EVERY time you upgrade to a new license file, you will need to REPEAT this step and add the vendor port assignment to the new license file. Stop and re-start the flexlm service to activate the vendor daemon on the correct port.
  • If you’re running Windows Firewall (recommended), you will need to ensure that BOTH the flexlm port AND the vendor daemon port are open.
  • The jumbo packet issue with Sentinel RMS (Socet GXP) will not be an issue with flexlm because TCP negotiates a maximum packet size between client and server when establishing a connection. The fact that flexlm uses TCP makes it a better license server product in my opinion. Unfortunately software vendors are not usually interested in switching to a different license management platform once they learn how to use an inferior one.

As an example, let’s consider RemoteView. After installing the flexlm license manager provided by RemoteView, we would place the vendor-provided license file in the flexlm license directory and use lmtools to make sure that the flexlm service is configured to use the appropriate license file. Then we would edit the license file to assign a static port to the vendor daemon: VENDOR overwatc PORT=27001. After stopping and re-starting flexlm using lmtools, flexlm should be listening on TCP 27000 and the vendor daemon “overwatc” on TCP 27001. Use the Windows Firewall configuration tools to open ports TCP 27000-27001, which will allow your RemoteView clients to connect.
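A sketch of the edited server license file follows – the hostname, hostid, ports, and FEATURE details all come from your vendor-issued file and are placeholders here; the only change we make is the PORT= addition on the VENDOR line:

```
SERVER yourServerName 0123456789AB 27000
VENDOR overwatc PORT=27001
# vendor-issued FEATURE/INCREMENT lines follow, unchanged
```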

For your client machines, I recommend that you make client-specific license files based on your flexlm server license file. To do this, copy the server license file and then rename to something like RemoteViewClient.lic (substitute your licensed product name). Edit the file and REMOVE ALL LINES EXCEPT the line beginning with SERVER. Immediately after the SERVER line, place a new line with USE_SERVER as the only text. The client license file will look something like the following.

SERVER yourServerName 0123456789AB
USE_SERVER

On each client system, use the software vendor instructions to install the newly created client license file. This client license will never need any changes unless the server name or server MAC address/hostid changes. When you update your server license file, the changes will be automatically available to clients (after a restart of flexlm and the client software).

Posted in System Administration