MIPLIB Image Comparison (MIC)

This website accompanies a paper that is currently under review for publication consideration. A draft of the paper may be downloaded from SSRN.

The work presented here involves comparing and measuring the similarity among mixed-integer programming (MIP) models and instances. There are two pillars to this analysis. The first is a collection of well-documented MIP instances, derived from MIPLIB 2017, an online repository of MIP instances commonly used for solver benchmarking.

The second pillar of this analysis is the methodology for MIP comparison. Here, MIPs are compared based on the structure of the constraint coefficient matrices (CCMs) for one or more instances of a model. The CCM is the matrix \(\mathbf{A}\) in an integer program of the form: $$\max\{\mathbf{cx} : \mathbf{Ax} \le \mathbf{b},\ \mathbf{x} \in \mathbb{Z}^{n}_{+}\}.$$
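As a small illustration (a made-up example, not an instance from the collection), consider the integer program $$\max\{3x_1 + 2x_2 : x_1 + x_2 \le 4,\ 2x_1 + x_2 \le 5,\ x_1, x_2 \in \mathbb{Z}_{+}\},$$ whose CCM is \(\mathbf{A} = \begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix}\). It is the pattern of entries in this matrix, rather than the objective or right-hand side, that drives the comparison described below.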

The methodology for making these comparisons involves the following three steps:

  1. CCM structure is represented as an image by treating coefficient matrix entries as pixel values in a digital image.
  2. Automated feature engineering (through the use of deep learning techniques) is used to extract feature vectors from images.
  3. The Euclidean distance between two feature vectors is treated as a measure of the similarity between the corresponding images. This distance is referred to as image-based structural similarity (ISS).
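The three steps above can be sketched in a few lines of Python. This is a simplified illustration, not the pipeline from the paper: the CCM is rendered as a fixed-size grayscale image by recording the density of nonzero coefficients in each pixel's block, and a plain flattening of the image stands in for the deep-learning feature extractor. All function names and the image size are choices made for this sketch.

```python
import numpy as np


def ccm_to_image(A, shape=(32, 32)):
    """Step 1: render a constraint coefficient matrix (CCM) as a
    fixed-size grayscale image. Each pixel records the fraction of
    nonzero coefficients in the corresponding block of the matrix."""
    A = np.asarray(A, dtype=float)
    nonzero = (A != 0).astype(float)
    rows = np.linspace(0, nonzero.shape[0], shape[0] + 1).astype(int)
    cols = np.linspace(0, nonzero.shape[1], shape[1] + 1).astype(int)
    img = np.zeros(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            block = nonzero[rows[i]:rows[i + 1], cols[j]:cols[j + 1]]
            img[i, j] = block.mean() if block.size else 0.0
    return img


def feature_vector(img):
    """Step 2 (stand-in): the paper extracts features with deep
    learning; here we simply flatten the image into a vector."""
    return img.ravel()


def iss(A1, A2):
    """Step 3: image-based structural similarity (ISS) as the
    Euclidean distance between feature vectors. Smaller values
    indicate more similar CCM structure."""
    f1 = feature_vector(ccm_to_image(A1))
    f2 = feature_vector(ccm_to_image(A2))
    return float(np.linalg.norm(f1 - f2))
```

With this sketch, two identical matrices have an ISS of zero, and matrices with different sparsity patterns (say, a diagonal versus a dense matrix) yield a strictly positive distance.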

In this analysis, the methodology described above is applied to a subset of the instances in MIPLIB 2017. MIPs are then compared according to the ISS values between their corresponding images. The ability to compare and relate MIPs in this way has a number of useful applications. Most notably, problems with similar CCM structures may benefit from similar solution approaches. Additionally, a comparison based on CCM structure is agnostic to instance application, allowing connections to be drawn between problems that might otherwise be viewed as unrelated.

As a point of comparison, instance similarity rankings from MIPLIB 2017 are also provided. In MIPLIB 2017, similarity between MIPs is measured according to a set of 110 manually defined instance statistics. Viewing the MIC and MIPLIB results side by side helps to demonstrate the potential benefits of an image-based comparison approach.

Instances were originally downloaded from MIPLIB 2017 for use in this analysis on February 28, 2019.

For those attending the 2019 INFORMS Annual Meeting in Seattle, this work will be presented in the "CS and OR Interface" session (Session Number SD92). This session is scheduled for Sunday, October 20, from 4:30 pm to 6:00 pm, and will be located in 92 - Sheraton - Grand C.

Instances

The collection of 992 MIP instances considered in the MIC can be found on the Instances page. Click on any name to view a summary of the MIC results for the corresponding instance. Note that there are 73 instances in MIPLIB 2017 which were not considered in the MIC. These instances, along with details on why they were excluded from the MIC, are also listed on this page. To view MIPLIB's analyses visit the Collection page for MIPLIB 2017.

Model Groups

MIPLIB 2017 defines a collection of model groups, each of which contains a subset of no more than five instances. Instances in the same model group share a common submitter, and were identified as having been generated by either the same MIP model or by variations of a model. The set of 238 model groups considered by the MIC can be found on the Model Groups page. Click on any name to view a summary of the MIC results for the corresponding model group.

About

Additional information regarding the MIC and its companion paper is provided on the About page. Citation information, as well as links to each of the MIC authors' personal webpages, can also be found there.