Over the past 10 or so years, the number of [AIX, HPUX, Solaris] to Linux migrations has been increasing steadily, along with the “safe” recommended maximum number of concurrent users. The reason is simple, of course: cost. Unless your OpenEdge application is serving thousands of users, Linux can probably do the job just as well as one of the proprietary Unix flavours. If you have fewer than 250 concurrent users, this should be a no-brainer for 98% of you.
Here are some points to consider for the OpenEdge migration:
- You need to dump and load. Databases and backups are not cross-platform compatible. I’ll do a blog on dump and load strategies at some point in the near future.
- Take advantage of the migration to upgrade to a recent version of 64-bit Progress. It blows me away to see 32-bit deployments in 2015!
- Take advantage of the migration to implement some kind of monitoring.
- Just in the past 6 months I have had to consult at customers where the before-image (BI) file grew to fill the file system and nobody noticed until the database crashed.
- Another site running OpenEdge Replication didn’t know that replication had been down for 3 weeks. And few of you realize that this typically means that you have NO DB BACKUPS.
- You can use ProTop (free, but no alerts), or for just a few dollars a day it can alert you to all these problems and more.
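Even without a full monitoring tool, a small cron-able check catches the full-file-system scenario. This is a minimal sketch: the `/tmp` path and 90% threshold are placeholders, so point it at whatever file system actually holds your database and BI file.

```shell
#!/bin/sh
# Warn when the file system holding the database (and its BI file)
# crosses a usage threshold. Run it from cron every few minutes.
check_fs() {
    fs="$1"; limit="$2"
    # df -P forces one-line POSIX output so the awk parsing is reliable
    pct=$(df -P "$fs" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$pct" -gt "$limit" ]; then
        echo "WARNING: $fs is ${pct}% full (limit ${limit}%)"
    fi
}

check_fs /tmp 90   # placeholder: use the file system holding your .bi file
```

Mail the output to yourself from cron and you will hear about the problem before the database does.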
- Watch out for java. Java compatibility is very specific in OpenEdge and you should check the Platform Availability Guide. As of OE 11 Java is included in the OE installation but prior to that it was almost certain that you had to install an older version of Java than was installed on your Linux box.
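A quick first check on the new box, before digging into the Platform Availability Guide (the fallback message is just for machines with no Java at all):

```shell
# See what Java the new Linux server offers, then compare it against
# what the Platform Availability Guide lists for your OpenEdge release.
if command -v java >/dev/null 2>&1; then
    java -version 2>&1 | head -n 1
else
    echo "no java on PATH"
fi
```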
- Don’t forget custom stuff in $DLC
- Properties files in $DLC/properties: not all can be copied over directly to your new version so be careful how you migrate these
- Customized $DLC/protermcap
- Customized $DLC/proword files (and don’t forget to assign them to your new database with proutil -C word-rules)
- Clean up your directory and file system structure! Many old sites still have a separate file system for every component of the application and database. You don’t need that anymore, and you’re probably hurting your performance more than helping. I often see new Linux boxes presented with one giant LUN from the SAN, only to have it carved up into 30 file systems. Why? 1% of you may have valid reasons. 99% of you do not.
- Clean up your scripting! One of the biggest challenges of migrating to a new server with a new Progress version and new cleaned-up directory structure is locating all the various scripts that have been written throughout the years. One person puts them in /usr/local/bin; the next guy sticks them directly in /usr/bin; of course there are a pile in /<application directory>/<weird subdirectory>; and my favourite place to find scripts: /archive/data/2004/. Yes, I’ll be sure to check that directory when we migrate.
- The previous point ties directly into this one: grep all your code for “OS-COMMAND”, “UNIX …”, “INPUT THROUGH”…i.e. any piece of code that shells out and runs an operating system command.
- You (hopefully) are planning on moving scripts from esoteric locations to standard locations like /usr/local/bin
- The output format of common commands might have changed
- Some commands are operating-system specific. For example “bdf” on HPUX versus “df” on Linux
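Here is a sketch of that grep, assuming 4GL source in .p/.i/.w files under the current directory; extend the pattern list with anything else your shop uses:

```shell
# List every source line that shells out to the operating system so each
# call site can be reviewed against the new server's commands and paths.
# grep exits 1 when nothing matches, hence the || true.
grep -rniE --include='*.[piw]' \
    -e 'OS-COMMAND' -e 'INPUT THROUGH' -e 'OUTPUT THROUGH' -e 'UNIX[ (]' \
    . || true
```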
- DANGEROUS: custom non-Progress code. Many times in 20 years I have worked on migrations where some magical calculation was written in C and used by the Progress application. Of course, the source code for that C program is nowhere to be found and the guy who wrote it retired to Florida 7 years ago. Note that this could be in the form of an executable or a shared library (DEFINE PROCEDURE …EXTERNAL).
- Uncommon: PROBUILD code. Back in the old days (you know, the ’90s), we would occasionally come across sites that had written custom C code and linked it directly into the _progres executable. Search for CALL statements in your code.
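A hedged starting point for finding those native hooks, again assuming 4GL source under the current directory (expect some false positives on the word CALL):

```shell
# Flag shared-library bindings (PROCEDURE ... EXTERNAL) and old
# PROBUILD-style CALL statements that reach into native code.
grep -rniE --include='*.[piw]' \
    -e 'EXTERNAL[[:space:]]' -e '(^|[[:space:]])CALL[[:space:]]' \
    . || true
```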
- During your testing, watch out for code that does scp, FTP, EDI or other such data transfers. You don’t want to be pulling production EDI files during your testing!
- Take advantage of the migration to remove any hard-coded paths from your code.
For the UNIX to Linux migration itself, a lot of the work has to be done by hand, but some of it can be scripted:
- It’s fairly easy to copy over users, passwords and groups. Usually I like to keep the same UID and GID as it makes transferring files from UNIX to Linux much easier
- You probably won’t be able to copy /etc/passwd and related files directly but you can easily script the transformation
- Watch out not to bring over system accounts!
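A sketch of that transformation with awk. The minimum UID of 500 is a made-up cutoff; system accounts on older HP-UX/AIX boxes typically sit below 100 or 500, so check your actual /etc/passwd before trusting any number. The sample data just makes the sketch self-contained.

```shell
# old_passwd stands in for a copy of /etc/passwd from the old UNIX server
cat > old_passwd <<'EOF'
root:x:0:3::/:/sbin/sh
daemon:x:1:5::/:/sbin/sh
jsmith:x:501:100::/home/jsmith:/usr/bin/ksh
mjones:x:502:100::/home/mjones:/usr/bin/ksh
EOF

# Keep only regular accounts so system users never cross over;
# identical UID/GID values preserve file ownership when data is copied.
awk -F: -v min=500 '$3 >= min' old_passwd > users_to_add
cat users_to_add
```

Feed users_to_add to newusers(8) after fixing up the password field, or loop over it with useradd -u/-g; either way, remember that the old /usr/bin/ksh shells in field 7 probably need to become /bin/bash.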
- Consider enabling Active Directory authentication. It’s fairly easy to do.
- Printers you will probably have to create manually. But since most ChUI Progress applications send their own escape sequences to the printer, you can create them all as “Generic” and just manually edit /etc/cups/printers.conf.
- If any users’ terminal sessions point directly to the old hostname, consider creating an alias in the DNS. You can point the alias to the old server and slowly migrate the users before go-live; then at go-live, simply point the DNS alias at the new server.
- Some devices may use the IP address of the old server.
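In BIND zone-file terms the alias is a one-line CNAME (all names here are invented); users and devices connect to the alias, and go-live becomes a single record change:

```
; before go-live: the alias follows the old HP-UX box
erp   IN  CNAME   old-hpux.example.com.

; at go-live, change it to:
; erp   IN  CNAME   new-linux.example.com.
```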
- If you bring over home directories, watch out for .profile versus .bash_profile. UNIX shells tend to use the former while Linux/bash uses the latter.
- While you’re at it, change the ownership of all the users’ .profile files to root and the permissions to 644. I have seen crafty users use FTP to pull their .profile, comment out the “exec application” line and push back the modified profile.
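One low-risk approach, assuming you copy the old .profile files over unchanged: give each user a tiny .bash_profile that simply delegates, since a bash login shell reads .bash_profile first and only falls back to .profile when it is absent. (Watch for ksh-isms in the old file that bash may not like.)

```shell
# Write a minimal ~/.bash_profile that pulls in the migrated ~/.profile,
# so ksh/sh-era settings still load under a bash login shell.
cat > "$HOME/.bash_profile" <<'EOF'
# Delegate to the .profile copied from the old UNIX server
if [ -f "$HOME/.profile" ]; then
    . "$HOME/.profile"
fi
EOF
```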
- Watch out for iptables, the Linux firewall. Turn it off if you don’t need it.
- Watch out for SELinux. It has made my life more complicated in the past, though I am reluctant to suggest that you disable it.
- Watch out for RHEL and CentOS 7. With the switch to systemd, they changed all the admin commands just to piss us off. I have not forgiven them yet.
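For those of us with 20 years of muscle memory, a small translation table (the sshd service name is just an example):

```
# RHEL/CentOS 6 habit              RHEL/CentOS 7 (systemd) equivalent
# service sshd restart         ->  systemctl restart sshd
# chkconfig sshd on            ->  systemctl enable sshd
# chkconfig --list             ->  systemctl list-unit-files
# service iptables stop        ->  systemctl stop firewalld
# /etc/init.d/sshd status      ->  systemctl status sshd
```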
Did I forget something important? I’m sure I did. Share it in the comments for everyone please.
White Star Software