Adding Apache/Httpd environment variables for PHP/Web on RHEL/CentOS

This is related to PDO Informix.

I need to run both the CLI and web pages at the same time. PDO Informix looks for the Informix SDK path (strangely); I think that's because it uses the SDK libraries instead of bundling all the functions into PDO itself.

Getting the path into Apache's environment can be kind of tricky. This solution may only work for my setup, though.

#1. Add the path information for all users. This covers the PHP CLI; it is not visible from the web server.

$ cd /etc/profile.d
$ vi informix.sh
And add the following lines:
export INFORMIXDIR="/opt/IBM/informix/4.10"
export PATH=$PATH:$INFORMIXDIR

#2. For Apache env variables, RHEL/CentOS provides /etc/sysconfig/httpd
$ vi /etc/sysconfig/httpd
And add the following line:
INFORMIXDIR="/opt/IBM/informix/4.10"

Restart Apache server.

When I searched for information, many people indicated that I need to export this value. I tried that, but it didn't work for me.
Many resources also point to SetEnv in httpd.conf. The variable shows up in phpinfo(), but it doesn't actually work.
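To verify which SAPI actually sees the variable, a quick check like this can help (my own sketch; the function name is mine). Run it once from the CLI and once through a web request to compare the two environments:

```php
<?php
// Sketch: report whether INFORMIXDIR reached the current PHP SAPI.
function informixDirOrNull()
{
    $dir = getenv('INFORMIXDIR');
    return $dir === false ? null : $dir;
}

$dir = informixDirOrNull();
echo $dir !== null
    ? "INFORMIXDIR = {$dir}\n"
    : "INFORMIXDIR is not visible to this SAPI\n";
```

If the CLI prints the path but the web request doesn't, the value never made it into Apache's environment.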

Installing PHP PDO Informix on RHEL/CentOS

When I installed PHP, I used the following command:
$ yum install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo

On a CentOS, Red Hat Enterprise Linux, or Fedora machine and their derivatives, this is as simple as running this command, either using sudo or as the root user:
$ yum install php-pear
The pecl command can then be found at /usr/bin/pecl. On other Linux distributions it should be a similar process.

That doesn't mean you're ready to compile yet, though. You also need the PHP development headers:
$ yum install php-devel

1) Download the latest version of PDO_INFORMIX
$ mkdir pdo
$ cd pdo
$ wget http://pecl.php.net/get/PDO_INFORMIX-1.3.3.tgz

2) Uncompress it
$ tar zxf PDO_INFORMIX-1.3.3.tgz
$ cd PDO_INFORMIX-1.3.3/

3) Set your INFORMIXDIR
$ export INFORMIXDIR=/opt/IBM/informix/4.10

4) Execute phpize
$ phpize
Configuring for:
PHP Api Version: 20090626
Zend Module Api No: 20090626
Zend Extension Api No: 220090626
configure.in:3: warning: prefer named diversions
configure.in:3: warning: prefer named diversions

5) Execute ./configure
$ ./configure
configure: loading site script /usr/share/site/x86_64-unknown-linux-gnu
checking for grep that handles long lines and -e… /usr/bin/grep
checking for egrep… /usr/bin/grep -E
checking for a sed that does not truncate output… /usr/bin/sed
checking for gcc… gcc

6) Compile and install the extension
$ make
$ sudo make install

Add a module file under /etc/php.d:
$ cd /etc/php.d
$ vi 30-pdo_informix.ini
(The number prefix is optional; files in php.d load in alphabetical order, so it only controls load order.)
And add the line:
extension=pdo_informix.so

Restart Apache server.
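With the extension loaded, a connection can be sketched like this. The DSN format follows the PECL PDO_INFORMIX documentation; the host, server name, database, credentials, and service port below are placeholders, not values from my setup:

```php
<?php
// Sketch of a PDO_INFORMIX connection. All connection values are
// placeholders; substitute your own server details.
function informixDsn($host, $server, $db, $service = 9088)
{
    return "informix:host={$host}; service={$service}; "
         . "database={$db}; server={$server}; "
         . "protocol=onsoctcp; EnableScrollableCursors=1";
}

try {
    $pdo = new PDO(informixDsn('localhost', 'ids_server', 'mydb'), 'user', 'secret');
    foreach ($pdo->query('SELECT FIRST 1 tabname FROM systables') as $row) {
        echo $row['tabname'], PHP_EOL;
    }
} catch (PDOException $e) {
    // Without the driver and a live server this reports "could not find driver".
    echo 'Connection failed: ', $e->getMessage(), PHP_EOL;
}
```

If this throws "could not find driver" on the web side but not the CLI, that usually points back to the INFORMIXDIR environment problem above.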

Installing Informix Client SDK on RHEL/CentOS

As an Informix user, you have to jump through a lot of hoops, since the product has become a dinosaur due to the lack of support from IBM.

I prefer PHP to Perl for writing shell programs, and connecting to Informix was full of surprises. I still don't understand why IBM insists that a simple Informix DB library has to look for the SDK. It is just heart-wrenching, but it is what it is, I guess.

First off, installing the Informix Client SDK itself turned out to be a pretty daunting task. Nothing works out of the box, and I wasn't prepared enough.

Download the Informix Client SDK from the IBM website (it's not easy to navigate to find what you need; that is another challenge in itself).

Here are the issues I faced:
Exception “JRE libraries are missing or not compatible” occurs while installing Content Platform Engine 5.2.1 on Linux

Preparing to install…
Extracting the JRE from the installer archive…
Unpacking the JRE…
Extracting the installation resources from the installer archive…
Configuring the installer for this system’s environment…
Launching installer…
JRE libraries are missing or not compatible….
Exiting….

It is caused by the ld-linux.so.2 library missing on Linux.
To resolve the problem, install it with the following command:
$ yum install ld-linux.so.2

And another cause is insufficient permissions in the /tmp directory. In environments where obtaining the required permissions may not be straightforward due to how the server is locked down, security policies, etc., there is a simple workaround. You need to create a new “temp” directory in a location where you do have the proper permissions.

Example:
$ mkdir /opt/informix/tmp
$ export IATEMPDIR=/opt/informix/tmp

(You have to re-apply this export whenever you start a new session.)

Thanks to this link: https://www.coreblox.com/blog/2018/2/ca-access-gateway-install-error-jre-libraries-are-missing-or-not-compatible

I faced an additional issue on another server:

“One or more prerequisite system libraries are not installed on your computer.
Install libncurses.so.5 and then restart the IBM Informix installation
program.”

In this case, you need to install the ncurses libraries:
$ yum install libncurses*

DB Query for Pages that use a Certain Fieldable Panel Pane (Drupal 7)

In Drupal 7, the DB structure of the panel-related tables has been hard to grasp (I am glad to see Drupal 8 cleaned up a lot of the clutter and simplified its structure). If you look into it, it makes sense, but it can be tricky to figure things out. One of the things I looked into recently was finding node pages that use a certain fieldable panel pane. A particular pane was creating layout issues, so I needed to find out how many pages use it by querying the DB directly. I'm on Pantheon, so I can run this query through a terminus + drush command.

$ terminus drush site.env -- sql-query "SELECT n.nid, ua.alias, sp.*, pp.pid, pp.did
FROM panelizer_entity pe
JOIN node n ON pe.entity_id = n.nid AND pe.entity_type = 'node' 
  AND n.vid = pe.revision_id
JOIN panels_pane pp ON pe.did = pp.did
JOIN (
  SELECT CONCAT('vuuid:', fv.vuuid) subtype, ff.fpid 
  FROM fieldable_panels_panes ff
  JOIN fieldable_panels_panes_revision fv 
    ON ff.vid = fv.vid AND ff.bundle = 'ffp name goes here..') sp 
  ON pp.subtype = sp.subtype
JOIN (
  SELECT alias, SUBSTRING(source, LOCATE('/', source)+1) entity_id 
  FROM url_alias WHERE source LIKE 'node%') ua
  ON n.nid = ua.entity_id
WHERE n.status = 1 
GROUP BY n.nid, pp.pid"

JavaScript to Shuffle a Deck of Cards

It was just another day. I came across this brain teaser and decided to do it in JS: a JavaScript function to shuffle a deck of cards. I represent the cards as numbers in an array.

const leaf = 52; // count of cards
const deck = [];

for (let i = 1; i <= leaf; i++) {
  let j = Math.floor(Math.random() * leaf);
  while (deck[j]) { // slot already taken; pick another
    j = Math.floor(Math.random() * leaf);
  }
  deck[j] = i;
}
console.log(deck);
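For comparison, here is a sketch of the same idea in PHP using the classic Fisher-Yates shuffle, which swaps in place instead of retrying occupied slots (the function name is mine):

```php
<?php
// Fisher-Yates shuffle: walk the deck backwards, swapping each
// position with a random earlier (or same) position.
function shuffleDeck($count = 52)
{
    $deck = range(1, $count);
    for ($i = $count - 1; $i > 0; $i--) {
        $j = random_int(0, $i);
        [$deck[$i], $deck[$j]] = [$deck[$j], $deck[$i]];
    }
    return $deck;
}

print_r(shuffleDeck());
```

Unlike the retry loop above, this always finishes in exactly $count - 1 swaps.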

PHP Array Reverse by array_walk() (Without Loops)

I've procrastinated writing the following function for over 11 years. It is simple, though; I have a back story about it. Thanks to that, I have been hanging onto it even though it's so simple. This will take a load off my mind. Better an 11-years-late sorry than never.

$arr_str = ['a', 'b', 'c', 'd'];
$arr_rev = [];
$arr_len = count($arr_str);
array_walk($arr_str, function($item, $key) use (&$arr_rev, &$arr_len) {
  $arr_rev[--$arr_len] = $item;
});
ksort($arr_rev);
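Wrapped up as a reusable function (the name is my own), the same trick looks like this:

```php
<?php
// Reverse an array with array_walk() instead of a loop.
// Items are written from the highest key down, then ksort()
// restores ascending key order.
function reverse_with_walk(array $items)
{
    $reversed = [];
    $len = count($items);
    array_walk($items, function ($item) use (&$reversed, &$len) {
        $reversed[--$len] = $item;
    });
    ksort($reversed);
    return $reversed;
}

print_r(reverse_with_walk(['a', 'b', 'c', 'd'])); // d, c, b, a
```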

PhantomJS – Screen capture of auth protected page (& keep sessions)

Recently I needed to get screen captures of pages that are auth protected. I realized PhantomJS has a file module with which you can read and write cookies. Usually PhantomJS drives one page at a time, which seems to make a multi-step process hard. Of course, there are examples:

http://code-epicenter.com/how-to-login-amazon-using-phantomjs-working-example/

But opening a new page is a different story. Eventually I figured it out using global PhantomJS state. The following is the result:

/**
 * The purpose of the source code is to get a screen capture of a page
 * of auth protected page by running PhantomJS command line.
 * It performs form authentication if the session is not alive.
 * If cookie file exists, it uses the session. 
 * NOTE: I haven't added any arguments for this action.
 */
var webpage = require('webpage');
var fs = require('fs');
var system = require('system');
var page, loginPage;

// Phantomjs global config
phantom.cookiesEnabled = true;
phantom.javascriptEnabled = true;

// Variables settings
var cookie = 'path/to/cookie.json'; // Cookie file location
var max_login = 3; // Maximum login attempts
var login_attempt = 0; // Login attempt count
var login_url = 'https://test.com/login'; // Login page URL
var logout_url = 'https://test.com/logout'; // Logout page URL
var page_url = 'https://test.com/protected'; // Page to capture
var outfile = 'page_content.html'; // Saved page HTML (placeholder path)
var thumbnailFile = 'page_thumb.png'; // Screen capture output (placeholder path)
var timeout = 500; // Delay in ms before rendering the capture

/**
 * Add cookies before opening the page
 */
function addCookieInfo() {
  Array.prototype.forEach.call(JSON.parse(fs.read(cookie)), function(param) {
    phantom.addCookie(param);
  });
}

/**
 * Remove the saved cookie file so the next run performs a fresh login
 */
function removeCookieFile() {
  if (fs.isFile(cookie)) {
    fs.remove(cookie);
  }
}

/**
 * Run login page and try form authentication
 */
function runLogin() {
  if (typeof loginPage === 'object') {
    loginPage.close();
  }
  if (login_attempt >= max_login) {
    system.stderr.writeLine('Reached max login attempt count.');
    phantom.exit();
  }
  else {
    login_attempt++;
    loginPage = webpage.create();
    loginPage.open(login_url, function(status) {
      if (status === "success") {
        system.stderr.writeLine('form started');
        loginPage.evaluate(function() {
          document.getElementById("name").value = "username";
          document.getElementById("pass").value = "password";
          document.getElementById("login-form").submit();
        });     
        loginPage.onLoadFinished = function(status) { 
          if (status === 'success') {
            if (!phantom.state) {
              phantom.state = 'no-session';
            }
            if (phantom.state === 'no-session') {
              fs.write(cookie, JSON.stringify(phantom.cookies), "w");
              phantom.state = 'run-state';
              setTimeout(runPage, 500);
            }
          }
        };
      }
    });
  }
}

/**
 * Run page to get screen capture
 */
function runPage() {
  if (typeof page === 'object') {
    page.close();
  }
  page = webpage.create();
  addCookieInfo();
  page.open(page_url, function(status) {
    if (status !== 'success') {
      system.stderr.writeLine('Unsuccessful loading of: ' + page_url + ' (status=' + status + ').');
      system.stderr.writeLine('Content: ' + page.content);
      if (page.content) {
        fs.write(outfile, "error", 'w');
      }
      phantom.exit();
    }
    else {
      if (phantom.state === 'run-state') {
        window.setTimeout(function() {
          if (thumbnailFile) {
            page.render(thumbnailFile);
          }
          if (page.content) {
            fs.write(outfile, page.content, 'w');
          }
          page.render("page_service.png");
          phantom.exit();
        }, timeout);
      }

    }   
  });

  page.onResourceReceived = function(response) {  
    if (response.stage == 'end'){
      return;
    }
    if (response.url == page_url) {
      if (response.status == 403) {
        phantom.state = 'no-session';
      }
      else {
        phantom.state = 'run-state';
        response.headers.forEach(function(header){
          system.stdout.writeLine('HEADER:' + header.name + '=' + header.value);
        });
        system.stdout.writeLine('STATUS:' + response.status);
        system.stdout.writeLine('STATUSTEXT:' + response.statusText);
        system.stdout.writeLine('CONTENTTYPE:' + response.contentType);
      }
    }
  };
  /**
   * onLoadFinished callback
   * Check the status of login page and set state
   * If state is no-session and page is success, write cookies.
   */
  page.onLoadFinished = function(status) {
    if (status === 'success') {
      if (phantom.state == 'no-session') {
        removeCookieFile();
        setTimeout(runLogin, 500);
      }
    }
  };
}

// Main
phantom.state = 'no-state';
if (!fs.isFile(cookie)) {
  runLogin();
}
else {
  runPage();
}

To run the code, follow these steps:

  1. Download and save phantomjs
  2. Copy the source code, and save it to a folder
  3. On command line, navigate to the file and run the following:
 $ /path/to/bin/phantomjs /path/to/sessions.js

Drupal 8 search with Elasticsearch

  • Install module
  • Create an index on Elasticsearch engine
  • Create a view
  • Attach facet filters

The Search API module needs the Elasticsearch PHP library, which provides the abstraction layer for the Elasticsearch Connector module in Drupal. It can be installed through Composer.

$ composer require nodespark/des-connector:5.x-dev

$ composer update

Add Elasticsearch

Go to Configuration > Search and metadata > Elasticsearch Connector.

Click “Add Cluster” and configure the Server.

 


By default, the cluster name is "elasticsearch." If you want to edit the cluster/node information, edit the elasticsearch.yml file.

 

Go to Configuration > Search and metadata > Search API.

Click "Add Index".

 

 

When you select the "Content" data source, options are presented to choose which bundles should be indexed.

 

Before a search can be performed, select all the fields that should be available to search. That is configured in the "Fields" tab.

 

 

The last step is to add additional "processors". These include items such as:

  • Content access
  • Ignore case (case-insensitive search)
  • Tokenizer (split into individual words)

 

Once fields and processors are set up, go back to the "View" tab. It shows the status of the index; at this point, the content is ready to be indexed if it was not already set to index immediately when the index was created. Indexing of content is done via cron, and any new content will get indexed then.

 

  1. Go to Structure > Add view
  2. Provide a view name and select your index name as the view source
  3. Under Format > Show, select “Rendered Entity”

Or, you can select "Fields" and add each field you would like to display in the Fields section.

  1. Under Filter Criteria, add “Fulltext search” field and expose the field for filtering
  2. Add Sort Criteria: The best one to use is “Relevance (desc)”

With the search page set up now, we want to add facets to let users filter down content. Navigate to Configuration > Search and metadata > Facets, then click "Add facet".

 

The last step is to place the newly created Facet blocks on the Block Layout page.

 

 

  • The Elastic Stack (Elasticsearch, Logstash, and Kibana) lets you interactively search, discover, and analyze data to gain insights, which improves the analysis of time-series data.
  • No need for an upfront schema definition. A schema can be defined per type to customize the indexing process.
  • Has an edge in the cloud environment, though this depends on how SolrCloud advances.
  • Has advantages for enterprise or higher-ed search where analytics plays a bigger role.

Create an Epoch time string with milliseconds

Some APIs require epoch time with milliseconds, which PHP's time() function does not produce. And microtime() splits the time into two values unless you use the as-float flag. The following are a couple of workarounds. NOTE: The first one is something I found on StackOverflow.

function getEpochTimeWithMilsec() {
  $mt = explode(' ', microtime());
  return ((int)$mt[1]) * 1000 + ((int)round($mt[0] * 1000));
}

function getEpochTimeWithMilsec() {
  $mt = microtime(true) * 1000;
  return (int)round($mt);
}
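As a quick sanity check (my own addition), the millisecond value should sit within a second or two of time() * 1000:

```php
<?php
// Sanity check: compare the millisecond epoch against time() * 1000.
function getEpochTimeWithMilsec()
{
    return (int) round(microtime(true) * 1000);
}

$ms = getEpochTimeWithMilsec();
var_dump(abs($ms - time() * 1000) < 2000); // bool(true)
echo strlen((string) $ms), PHP_EOL;        // 13 digits for current dates
```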