We propose a new prescriptive analytics model based on robust satisficing to determine the here-and-now decision that achieves a target expected reward as well as possible under both risk ambiguity and prediction inaccuracy. The reward function of the decision model depends on uncertain parameters whose outcomes can be partially predicted from side information. We focus on a linear prediction model, where the accuracy of the prediction depends on how well the regression coefficients can be estimated from data. The robust satisficing model mitigates target shortfalls by incorporating the Wasserstein distance metric to assess risk ambiguity and a discrepancy metric to evaluate the prediction inaccuracy associated with the estimated regression coefficients. We provide a target-based confidence guarantee for the robust satisficing model and derive tractable formulations when the reward function is a saddle function, as well as for a two-stage linear optimization problem. We also propose tractable robust satisficing models when the prediction is decision-dependent. Using real data, we present case studies on a wine portfolio investment problem and a multi-product pricing problem. Through these numerical studies, we elucidate the benefits of our robust satisficing model over the predict-then-optimize approach: when evaluated on the actual distribution, the robust satisficing models yield solutions with lower risk, and with suitably chosen targets, they can also achieve a higher expected reward. We observe consistent and significant improvement over the benchmarks, and the improvements are more pronounced when data are limited.
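To fix ideas, the target-based robust satisficing approach underlying this work can be sketched schematically as follows; this is a generic illustration of the standard robust satisficing formulation, and the symbols ($\tau$, $k$, $\Delta_W$, $\hat{\mathbb{P}}$) are illustrative notation rather than taken from the paper:

```latex
\begin{align*}
\min_{x \in \mathcal{X},\, k \ge 0} \quad & k \\
\text{s.t.} \quad & \mathbb{E}_{\mathbb{P}}\!\left[ r(x, \tilde{z}) \right]
  \;\ge\; \tau - k \, \Delta_W\!\left(\mathbb{P}, \hat{\mathbb{P}}\right)
  \quad \forall\, \mathbb{P} \in \mathcal{P}(\mathcal{Z}),
\end{align*}
```

where $r(x, \tilde{z})$ is the reward of decision $x$ under uncertain parameters $\tilde{z}$, $\tau$ is the prescribed target expected reward, $\hat{\mathbb{P}}$ is a reference (e.g., empirical) distribution, and $\Delta_W$ is the Wasserstein distance. The decision minimizes the fragility $k$: the smaller $k$ is, the more gracefully the expected reward is allowed to fall short of the target as the true distribution deviates from the reference. The model in the abstract additionally accounts for prediction inaccuracy via a discrepancy term on the estimated regression coefficients, which this simplified sketch omits.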
This is joint work with Qinshen Tang, Minglong Zhou, and Taozeng Zhu.