93 commits
e00d517
Add files via upload
ido-shm-uel Oct 1, 2024
04cc0b1
Revert last change.
ido-shm-uel Oct 1, 2024
80be6dc
Added assignment, store-into-other NLR tests for every activation fun…
ido-shm-uel Oct 1, 2024
9fbca7b
Add files via upload
ido-shm-uel Oct 1, 2024
f7aee6d
Add files via upload
ido-shm-uel Oct 1, 2024
33f5d5f
Add files via upload
ido-shm-uel Oct 1, 2024
6d1c898
Add files via upload
ido-shm-uel Oct 1, 2024
ee5dd00
0-4-0: Implementing symbolic bound tightening for every activation fu…
ido-shm-uel Oct 1, 2024
85fe8f3
0-4-1: Adding symbolic bound tightening tests for all activation func…
ido-shm-uel Oct 1, 2024
32d7c52
0-5-0: Adding rounding constant for LP relaxation propagation.
ido-shm-uel Oct 1, 2024
5a7c5d8
0-5-1: Implementing forward-backward LP propagation algorithm for all…
ido-shm-uel Oct 1, 2024
79f7a84
0-5-2: Adding forward-backward LP propagation tests for all activatio…
ido-shm-uel Oct 1, 2024
9ee7609
0-6-0: Adding MILP bound types for two backpropagation of bounds algo…
ido-shm-uel Oct 1, 2024
2d9b85a
0-6-1: Adjusting in Options for two new MILP Bound types.
ido-shm-uel Oct 1, 2024
f5c7d1c
0-6-2: Introducing Polygonal Tightenings (output type of two new algo…
ido-shm-uel Oct 1, 2024
b363d05
0-6-3: Adjusting in NLR, LayerOwner for Polygonal Tightenings.
ido-shm-uel Oct 1, 2024
7993b1e
0-6-4: Introducing parameterised bounds in LPFormulator (two new algo…
ido-shm-uel Oct 1, 2024
c888160
0-6-5: Current state of PreimageApproximation + cleanup + corrections.
ido-shm-uel Oct 10, 2024
cd5f2de
0-6-6: New constants for PreimageApproximation (random seeds).
ido-shm-uel Oct 10, 2024
d86c0a3
0-6-7: TEMPORARY CHANGE - use boost v1.85 for Differential Evolution …
ido-shm-uel Oct 10, 2024
4467e09
0-6-8: Revert last change.
ido-shm-uel Oct 14, 2024
88e339a
0-6-9: Adapt DifferentialEvolution from Boost v1.85.
ido-shm-uel Oct 14, 2024
8031a41
0-6-10: Current state of PreimageApproximation (EstimateVolume implem…
ido-shm-uel Oct 14, 2024
dd584e7
0-6-11: Add constants for PreimageApproximation.
ido-shm-uel Oct 24, 2024
cd52524
0-6-12: Current state of PreimageApproximation (implement coordinate …
ido-shm-uel Oct 24, 2024
b719359
0-6-13: Timing PreimageApproximation with coordinate descent (very sl…
ido-shm-uel Oct 24, 2024
f29ccf0
0-6-14: Small Fix (same as 0-5---4).
ido-shm-uel Nov 6, 2024
3ee6903
Merge pull request #1 from NeuralNetworkVerification/master
ido-shm-uel Dec 5, 2024
0069af2
0-6--14: Current progress in PreimageApproximation (0-5--6 fixes, opt…
ido-shm-uel Dec 12, 2024
22f0eaa
Delete src/nlr/PolygonalTightening.h
ido-shm-uel Dec 12, 2024
24f2be1
Delete src/nlr/Test_NetworkLevelReasoner.h
ido-shm-uel Dec 12, 2024
8d0bf19
Delete src/nlr/GlobalConfiguration.h
ido-shm-uel Dec 12, 2024
81e28b7
Delete src/nlr/GlobalConfiguration.cpp
ido-shm-uel Dec 12, 2024
dea156a
Delete src/nlr/Engine.cpp
ido-shm-uel Dec 12, 2024
0d51b8a
Add files via upload
ido-shm-uel Dec 12, 2024
32474bc
Add files via upload
ido-shm-uel Dec 12, 2024
625a5b8
Add files via upload
ido-shm-uel Dec 12, 2024
745cc95
Add files via upload
ido-shm-uel Jan 9, 2025
2d77fb9
Delete src/nlr/GlobalConfiguration.cpp
ido-shm-uel Jan 9, 2025
304efac
Delete src/nlr/GlobalConfiguration.h
ido-shm-uel Jan 9, 2025
35cbdf5
Add files via upload
ido-shm-uel Jan 9, 2025
927302e
Add files via upload
ido-shm-uel Jan 9, 2025
73ce343
Add files via upload
ido-shm-uel Jan 9, 2025
714b4e4
Add files via upload
ido-shm-uel Jan 9, 2025
061bb57
Add files via upload
ido-shm-uel Jan 16, 2025
7d4a5a4
Add files via upload
ido-shm-uel Jan 16, 2025
9dd56fd
Merge pull request #2 from NeuralNetworkVerification/master
ido-shm-uel Feb 3, 2025
03a74a7
Delete src/nlr/AdamOptimizer.h
ido-shm-uel Feb 3, 2025
6ac31b9
Delete src/nlr/CoordinateDescent.h
ido-shm-uel Feb 3, 2025
756ca41
Delete src/nlr/DifferentialEvolution.h
ido-shm-uel Feb 3, 2025
5a3a72a
Delete src/nlr/GradientDescent.h
ido-shm-uel Feb 3, 2025
b093dac
Delete src/nlr/MILPSolverBoundTighteningType.h
ido-shm-uel Feb 3, 2025
bf8513b
Add files via upload
ido-shm-uel Feb 3, 2025
3b2fe08
Add files via upload
ido-shm-uel Feb 3, 2025
20c3577
Add files via upload
ido-shm-uel Feb 3, 2025
570eae8
Add files via upload
ido-shm-uel Feb 3, 2025
c0bd87b
Update GlobalConfiguration.cpp
ido-shm-uel Feb 5, 2025
838b5fb
Update CHANGELOG.md
ido-shm-uel Feb 6, 2025
031fcc8
Update OptionParser.cpp
ido-shm-uel Feb 6, 2025
5f7a24f
small fix for bug in SumOfInfeasibilitiesManager::proposePhasePatternU…
idan0610 Feb 15, 2025
ca73efb
Sync Fork.
ido-shm-uel Feb 20, 2025
f1718d7
Add files via upload
ido-shm-uel Feb 20, 2025
8804b24
Add files via upload
ido-shm-uel Feb 20, 2025
394fe99
Add files via upload
ido-shm-uel Feb 20, 2025
5bdeaa0
Add files via upload
ido-shm-uel Feb 20, 2025
021054f
Add files via upload
ido-shm-uel Feb 20, 2025
ba85af7
Merge pull request #6 from idan0610/fix_soi_proposal_bug
ido-shm-uel Feb 20, 2025
d526637
Pass Vector<double> &coeffs by ref, remove class PolygonalTightening
ido-shm-uel Feb 26, 2025
9d8c00a
Sync Fork.
ido-shm-uel Feb 27, 2025
5dc464b
Merge pull request #8 from NeuralNetworkVerification/master
ido-shm-uel Jun 12, 2025
5cf4d8e
Starting PMNR: Restore PolygonalTightening class, add backward-pmnr o…
ido-shm-uel Jun 12, 2025
cd522f7
Current progress in PMNR: 1. Implement initializeOutputLayerSymbolicB…
ido-shm-uel Jul 17, 2025
7c08e1e
Current progress in PMNR: 0. (previously)Temporary four new options f…
ido-shm-uel Jul 24, 2025
d5dd47a
Current progress in PMNR: 1. Corrections to initializeSymbolicBounds …
ido-shm-uel Aug 20, 2025
4cf1ca4
Merge pull request #10 from NeuralNetworkVerification/master
ido-shm-uel Aug 20, 2025
6f4b37e
Changed variable names for initializeSymbolicBoundsMaps and related f…
ido-shm-uel Aug 24, 2025
19e2e5d
Merge branch 'master' of https://github.com/ido-shm-uel/Marabou into …
ido-shm-uel Aug 24, 2025
4a00ed2
Fixed changes in nlr/CMakeLists.txt.
ido-shm-uel Aug 24, 2025
146563f
Revert last commit.
ido-shm-uel Aug 24, 2025
32b236c
Sync Fork: Merge pull request #12 from NeuralNetworkVerification/master
ido-shm-uel Sep 17, 2025
bd11626
Correcting SBT, LP for Sigmoid, Bilinear, Softmax.
ido-shm-uel Sep 17, 2025
3b64c75
Undo last commit: Relevant changes will be re-added in a larger commit.
ido-shm-uel Sep 17, 2025
a9d0930
Add files via upload
ido-shm-uel Sep 17, 2025
8ee3fc6
Delete src/LPFormulator.cpp
ido-shm-uel Sep 17, 2025
c552897
Delete src/Layer.cpp
ido-shm-uel Sep 17, 2025
0eef08c
Manually Sync Fork: Towards Proof Minimization (#889), plus: 1. Vario…
ido-shm-uel Oct 16, 2025
bc77f8e
Fix merge conflict.
ido-shm-uel Oct 16, 2025
b875851
Fix CHANGELOG.md.
ido-shm-uel Oct 16, 2025
8af0d34
Remove unused constant.
ido-shm-uel Oct 19, 2025
2184a86
Results of PMNR Optimization Benchmarks: 1. BBPS heuristic final calc…
ido-shm-uel Nov 21, 2025
08bcdac
Manually Sync Fork.
ido-shm-uel Nov 21, 2025
af2bb5d
Merge pull request #18 from ido-shm-uel/master: Manually Sync Fork + …
ido-shm-uel Nov 21, 2025
faee9d2
Three fixes: 1. Limiting PMNR max iterations to 10. 2. Removing unuse…
ido-shm-uel Dec 5, 2025
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -5,9 +5,12 @@
- Dropped support for parsing Tensorflow network format. Newest Marabou version that supports Tensorflow is at commit 190555573e4702.
- Fixed bug in the parsing of `transpose` nodes in command line C++ parser.
- Implemented forward-backward abstract interpretation, symbolic bound tightening, interval arithmetic and simulations for all activation functions.
- Implemented backward analysis using the preimage-approximation algorithm for `Relu`, `LeakyRelu`, `Sign` and `Bilinear` layers.
- Added the BaBSR heuristic as a new branching strategy for ReLU Splitting
- Support Sub of two variables, "Mul" of two constants, Slice, and ConstantOfShape in the python onnx parser
- Renamed SmtCore module to SearchTreeHandler
- Implemented backward analysis using the INVPROP algorithm, with support for all activation functions.
- Implemented backward analysis using partial multi-neuron relaxation (PMNR) with a BBPS-based heuristic for neuron selection.

## Version 2.0.0

19 changes: 19 additions & 0 deletions src/configuration/GlobalConfiguration.cpp
@@ -68,6 +68,24 @@ const double GlobalConfiguration::COST_FUNCTION_ERROR_THRESHOLD = 0.0000000001;

const unsigned GlobalConfiguration::SIMULATION_RANDOM_SEED = 1;

const unsigned GlobalConfiguration::VOLUME_ESTIMATION_RANDOM_SEED = 1;
const unsigned GlobalConfiguration::VOLUME_ESTIMATION_ITERATIONS = 25000;
const unsigned GlobalConfiguration::PREIMAGE_APPROXIMATION_OPTIMIZATION_RANDOM_SEED = 1;
const unsigned GlobalConfiguration::PREIMAGE_APPROXIMATION_OPTIMIZATION_MAX_ITERATIONS = 25;
const double GlobalConfiguration::PREIMAGE_APPROXIMATION_OPTIMIZATION_STEP_SIZE = 0.025;
const double GlobalConfiguration::PREIMAGE_APPROXIMATION_OPTIMIZATION_LEARNING_RATE = 0.25;
const double GlobalConfiguration::PREIMAGE_APPROXIMATION_OPTIMIZATION_WEIGHT_DECAY = 0;

const unsigned GlobalConfiguration::INVPROP_MAX_ITERATIONS = 10;
const double GlobalConfiguration::INVPROP_STEP_SIZE = 0.025;
const double GlobalConfiguration::INVPROP_LEARNING_RATE = 0.005;
const double GlobalConfiguration::INVPROP_WEIGHT_DECAY = 0.5;
const double GlobalConfiguration::INVPROP_INITIAL_GAMMA = 0.025;

const unsigned GlobalConfiguration::PMNR_RANDOM_SEED = 1;
const unsigned GlobalConfiguration::PMNR_SELECTED_NEURONS = 2;
const unsigned GlobalConfiguration::PMNR_BBPS_BRANCHING_CANDIDATES = 100;

const bool GlobalConfiguration::USE_HARRIS_RATIO_TEST = true;

const double GlobalConfiguration::SYMBOLIC_TIGHTENING_ROUNDING_CONSTANT = 0.00000000001;
@@ -118,6 +136,7 @@ const bool GlobalConfiguration::WRITE_JSON_PROOF = false;

const unsigned GlobalConfiguration::BACKWARD_BOUND_PROPAGATION_DEPTH = 3;
const unsigned GlobalConfiguration::MAX_ROUNDS_OF_BACKWARD_ANALYSIS = 10;
const unsigned GlobalConfiguration::MAX_ROUNDS_OF_PMNR_BACKWARD_ANALYSIS = 10;

const bool GlobalConfiguration::ANALYZE_PROOF_DEPENDENCIES = true;
const bool GlobalConfiguration::MINIMIZE_PROOF_DEPENDENCIES = true;
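For orientation, here is a minimal sketch of how gradient-style optimization constants like the INVPROP group above are typically consumed. The actual update rules live in the NLR module and are not part of this diff; `computeGradient()` is a hypothetical stand-in.

```cpp
// Illustrative only: a gradient-style update loop driven by the INVPROP
// constants above. Not the real implementation; computeGradient() is a
// hypothetical stand-in for the objective gradient w.r.t. one gamma multiplier.
#include <algorithm>

#include "GlobalConfiguration.h"

double computeGradient( double gamma ); // hypothetical

void invpropUpdateSketch()
{
    double gamma = GlobalConfiguration::INVPROP_INITIAL_GAMMA;
    for ( unsigned i = 0; i < GlobalConfiguration::INVPROP_MAX_ITERATIONS; ++i )
    {
        double step = GlobalConfiguration::INVPROP_LEARNING_RATE * computeGradient( gamma ) -
                      GlobalConfiguration::INVPROP_WEIGHT_DECAY * gamma;
        gamma = std::max( 0.0, gamma + step ); // multipliers are kept non-negative
    }
}
```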
50 changes: 49 additions & 1 deletion src/configuration/GlobalConfiguration.h
@@ -151,6 +151,51 @@ class GlobalConfiguration
// Random seed for generating simulation values.
static const unsigned SIMULATION_RANDOM_SEED;

// Random seed for EstimateVolume procedure (PreimageApproximation).
static const unsigned VOLUME_ESTIMATION_RANDOM_SEED;

// Number of iterations for EstimateVolume procedure (PreimageApproximation).
static const unsigned VOLUME_ESTIMATION_ITERATIONS;

// Random seed for PreimageApproximation optimization.
static const unsigned PREIMAGE_APPROXIMATION_OPTIMIZATION_RANDOM_SEED;

// Maximum iterations for PreimageApproximation optimization.
static const unsigned PREIMAGE_APPROXIMATION_OPTIMIZATION_MAX_ITERATIONS;

// Step size for PreimageApproximation optimization.
static const double PREIMAGE_APPROXIMATION_OPTIMIZATION_STEP_SIZE;

// Learning rate for PreimageApproximation optimization.
static const double PREIMAGE_APPROXIMATION_OPTIMIZATION_LEARNING_RATE;

// Weight decay for PreimageApproximation optimization.
static const double PREIMAGE_APPROXIMATION_OPTIMIZATION_WEIGHT_DECAY;

// Maximum iterations for INVPROP optimization.
static const unsigned INVPROP_MAX_ITERATIONS;

// Step size for INVPROP optimization.
static const double INVPROP_STEP_SIZE;

// Learning rate for INVPROP optimization.
static const double INVPROP_LEARNING_RATE;

// Weight decay for INVPROP optimization.
static const double INVPROP_WEIGHT_DECAY;

// Initial gamma value for INVPROP optimization.
static const double INVPROP_INITIAL_GAMMA;

// Random seed for PMNR (with randomized hyperplanes).
static const unsigned PMNR_RANDOM_SEED;

// Number of selected neurons for PMNR (with heuristically selected hyperplanes).
static const unsigned PMNR_SELECTED_NEURONS;

// Number of candidates for PMNR-BBPS branching points.
static const unsigned PMNR_BBPS_BRANCHING_CANDIDATES;

// How often should projected steepest edge reset the reference space?
static const unsigned PSE_ITERATIONS_BEFORE_RESET;

@@ -263,6 +308,10 @@ class GlobalConfiguration
*/
static const unsigned MAX_ROUNDS_OF_BACKWARD_ANALYSIS;

/* How many rounds of backward analysis to perform for the PMNR algorithm?
*/
static const unsigned MAX_ROUNDS_OF_PMNR_BACKWARD_ANALYSIS;

/* Analyze lemma dependencies when producing proofs
*/
static const bool ANALYZE_PROOF_DEPENDENCIES;
@@ -271,7 +320,6 @@
*/
static const bool MINIMIZE_PROOF_DEPENDENCIES;


#ifdef ENABLE_GUROBI
/*
The number of threads Gurobi spawns
3 changes: 2 additions & 1 deletion src/configuration/OptionParser.cpp
@@ -267,7 +267,8 @@ void OptionParser::initialize()
&( ( *_stringOptions )[Options::MILP_SOLVER_BOUND_TIGHTENING_TYPE] ) )
->default_value( ( *_stringOptions )[Options::MILP_SOLVER_BOUND_TIGHTENING_TYPE] ),
"The MILP solver bound tightening type: "
"lp/backward-once/backward-converge/lp-inc/milp/milp-inc/iter-prop/none." )
"lp/backward-once/backward-converge/backward-preimage-approx/backward-pmnr/lp-inc/milp/"
"milp-inc/iter-prop/none." )
#endif
;

4 changes: 4 additions & 0 deletions src/configuration/Options.cpp
@@ -209,6 +209,10 @@ MILPSolverBoundTighteningType Options::getMILPSolverBoundTighteningType() const
return MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_ONCE;
if ( strategyString == "backward-converge" )
return MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_CONVERGE;
if ( strategyString == "backward-preimage-approx" )
return MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PREIMAGE_APPROX;
if ( strategyString == "backward-pmnr" )
return MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PMNR;
else if ( strategyString == "milp" )
return MILPSolverBoundTighteningType::MILP_ENCODING;
else if ( strategyString == "milp-inc" )
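As a quick sanity check, a hypothetical test-style snippet for the two new strategy strings. `Options::get()` and `setString()` are assumed from the Options API used elsewhere in Marabou, and `ASSERT` is the usual Debug.h macro; only the getter itself is shown in the diff above.

```cpp
// Hypothetical round-trip check for the two new strategy strings
// (assumes the Options singleton and setString() API; not part of this diff).
#include "Debug.h"
#include "Options.h"

void checkNewBackwardAnalysisModes()
{
    Options *options = Options::get();

    options->setString( Options::MILP_SOLVER_BOUND_TIGHTENING_TYPE,
                        "backward-preimage-approx" );
    ASSERT( options->getMILPSolverBoundTighteningType() ==
            MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PREIMAGE_APPROX );

    options->setString( Options::MILP_SOLVER_BOUND_TIGHTENING_TYPE, "backward-pmnr" );
    ASSERT( options->getMILPSolverBoundTighteningType() ==
            MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PMNR );
}
```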
29 changes: 29 additions & 0 deletions src/engine/Engine.cpp
@@ -1598,6 +1598,8 @@ void Engine::performMILPSolverBoundedTightening( Query *inputQuery )
case MILPSolverBoundTighteningType::LP_RELAXATION_INCREMENTAL:
case MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_ONCE:
case MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_CONVERGE:
case MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PREIMAGE_APPROX:
case MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PMNR:
_networkLevelReasoner->lpRelaxationPropagation();
break;
case MILPSolverBoundTighteningType::MILP_ENCODING:
@@ -1662,6 +1664,31 @@ void Engine::performAdditionalBackwardAnalysisIfNeeded()
printf( "Backward analysis tightened %u bounds\n", tightened );
}
}

if ( _milpSolverBoundTighteningType ==
MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PREIMAGE_APPROX )
{
performMILPSolverBoundedTightening( &( *_preprocessedQuery ) );
unsigned tightened = performSymbolicBoundTightening( &( *_preprocessedQuery ) );
if ( _verbosity > 0 )
printf( "Backward analysis tightened %u bounds\n", tightened );
}

if ( _milpSolverBoundTighteningType == MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PMNR )
{
unsigned iter = 1;
unsigned tightened = performSymbolicBoundTightening( &( *_preprocessedQuery ) );
if ( _verbosity > 0 )
printf( "Backward analysis tightened %u bounds\n", tightened );
while ( tightened && iter < GlobalConfiguration::MAX_ROUNDS_OF_PMNR_BACKWARD_ANALYSIS )
{
performMILPSolverBoundedTightening( &( *_preprocessedQuery ) );
tightened = performSymbolicBoundTightening( &( *_preprocessedQuery ) );
if ( _verbosity > 0 )
printf( "Backward analysis tightened %u bounds\n", tightened );
++iter;
}
}
}

void Engine::performMILPSolverBoundedTighteningForSingleLayer( unsigned targetIndex )
@@ -1689,6 +1716,8 @@ void Engine::performMILPSolverBoundedTighteningForSingleLayer( unsigned targetIn
return;
case MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_ONCE:
case MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_CONVERGE:
case MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PREIMAGE_APPROX:
case MILPSolverBoundTighteningType::BACKWARD_ANALYSIS_PMNR:
case MILPSolverBoundTighteningType::ITERATIVE_PROPAGATION:
case MILPSolverBoundTighteningType::NONE:
return;
23 changes: 11 additions & 12 deletions src/engine/MILPEncoder.cpp
@@ -16,9 +16,9 @@

#include "MILPEncoder.h"

#include "DeepPolySoftmaxElement.h"
#include "FloatUtils.h"
#include "GurobiWrapper.h"
#include "Layer.h"
#include "TimeUtils.h"

MILPEncoder::MILPEncoder( const ITableau &tableau )
@@ -629,14 +629,14 @@ void MILPEncoder::encodeSoftmaxConstraint( GurobiWrapper &gurobi, SoftmaxConstra
}
if ( !useLSE2 )
{
symbolicLowerBias = NLR::DeepPolySoftmaxElement::LSELowerBound(
sourceMids, sourceLbs, sourceUbs, i );
symbolicLowerBias =
NLR::Layer::LSELowerBound( sourceMids, sourceLbs, sourceUbs, i );
if ( !FloatUtils::wellFormed( symbolicLowerBias ) )
wellFormed = false;
for ( unsigned j = 0; j < size; ++j )
{
double dldj = NLR::DeepPolySoftmaxElement::dLSELowerBound(
sourceMids, sourceLbs, sourceUbs, i, j );
double dldj =
NLR::Layer::dLSELowerBound( sourceMids, sourceLbs, sourceUbs, i, j );
if ( !FloatUtils::wellFormed( dldj ) )
wellFormed = false;
terms.append(
@@ -646,14 +646,14 @@
}
else
{
symbolicLowerBias = NLR::DeepPolySoftmaxElement::LSELowerBound2(
sourceMids, sourceLbs, sourceUbs, i );
symbolicLowerBias =
NLR::Layer::LSELowerBound2( sourceMids, sourceLbs, sourceUbs, i );
if ( !FloatUtils::wellFormed( symbolicLowerBias ) )
wellFormed = false;
for ( unsigned j = 0; j < size; ++j )
{
double dldj = NLR::DeepPolySoftmaxElement::dLSELowerBound2(
sourceMids, sourceLbs, sourceUbs, i, j );
double dldj =
NLR::Layer::dLSELowerBound2( sourceMids, sourceLbs, sourceUbs, i, j );
if ( !FloatUtils::wellFormed( dldj ) )
wellFormed = false;
terms.append(
@@ -667,15 +667,14 @@
// Upper-bound
wellFormed = true;
double symbolicUpperBias =
NLR::DeepPolySoftmaxElement::LSEUpperBound( sourceMids, targetLbs, targetUbs, i );
NLR::Layer::LSEUpperBound( sourceMids, targetLbs, targetUbs, i );
if ( !FloatUtils::wellFormed( symbolicUpperBias ) )
wellFormed = false;
terms.clear();
terms.append( GurobiWrapper::Term( 1, Stringf( "x%u", targetVariables[i] ) ) );
for ( unsigned j = 0; j < size; ++j )
{
double dudj = NLR::DeepPolySoftmaxElement::dLSEUpperbound(
sourceMids, targetLbs, targetUbs, i, j );
double dudj = NLR::Layer::dLSEUpperbound( sourceMids, targetLbs, targetUbs, i, j );
if ( !FloatUtils::wellFormed( dudj ) )
wellFormed = false;
terms.append( GurobiWrapper::Term( -dudj, Stringf( "x%u", sourceVariables[j] ) ) );
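For context, the LSE helpers used above bound the softmax through its log-sum-exp form. The underlying identity (standard, not introduced by this diff) is

$$\mathrm{softmax}_i(x) \;=\; \frac{e^{x_i}}{\sum_j e^{x_j}} \;=\; \exp\!\big( x_i - \mathrm{LSE}(x) \big), \qquad \mathrm{LSE}(x) = \log \sum_j e^{x_j},$$

so linear bounds on $\mathrm{LSE}$ translate into the symbolic linear bounds (the `dldj`/`dudj` coefficients and bias terms) that the encoder installs per output neuron.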
8 changes: 7 additions & 1 deletion src/engine/MILPSolverBoundTighteningType.h
@@ -34,8 +34,14 @@ enum class MILPSolverBoundTighteningType {
// Perform backward analysis
BACKWARD_ANALYSIS_ONCE = 5,
BACKWARD_ANALYSIS_CONVERGE = 6,
// Perform backward analysis using the PreimageApproximation Algorithm (arXiv:2305.03686v4
// [cs.SE])
BACKWARD_ANALYSIS_PREIMAGE_APPROX = 7,
// Perform backward analysis using PMNR with INVPROP and BBPS-based neuron selection
// (arXiv:2302.01404v4 [cs.LG], arXiv:2405.21063v3 [cs.LG]).
BACKWARD_ANALYSIS_PMNR = 8,
// Option to have no MILP bound tightening performed
NONE = 10,
NONE = 9,
};

#endif // __MILPSolverBoundTighteningType_h__
124 changes: 124 additions & 0 deletions src/engine/PolygonalTightening.h
@@ -0,0 +1,124 @@
/********************* */
/*! \file PolygonalTightening.h
** \verbatim
** Top contributors (to current version):
** Duligur Ibeling, Guy Katz, Ido Shmuel
** This file is part of the Marabou project.
** Copyright (c) 2017-2024 by the authors listed in the file AUTHORS
** in the top-level source directory) and their institutional affiliations.
** All rights reserved. See the file COPYING in the top-level source
** directory for licensing information.\endverbatim
**
** [[ Add lengthier description here ]]
**/

#ifndef __PolygonalTightening_h__
#define __PolygonalTightening_h__

#include "FloatUtils.h"
#include "MStringf.h"
#include "Map.h"
#include "NeuronIndex.h"

#include <cstdio>

class PolygonalTightening
{
public:
enum PolygonalBoundType {
LB = 0,
UB = 1,
};

PolygonalTightening( Map<NLR::NeuronIndex, double> neuronToCoefficient,
double value,
PolygonalBoundType type )
: _neuronToCoefficient( neuronToCoefficient )
, _value( value )
, _type( type )
{
}

/*
The coefficient of each neuron.
*/
Map<NLR::NeuronIndex, double> _neuronToCoefficient;

/*
Its new value.
*/
double _value;

/*
Whether the tightening tightens the
lower bound or the upper bound.
*/
PolygonalBoundType _type;

/*
Equality operator.
*/
bool operator==( const PolygonalTightening &other ) const
{
bool allFound = true;
for ( const auto &pair : _neuronToCoefficient )
{
bool currentFound = false;
for ( const auto &otherPair : other._neuronToCoefficient )
{
currentFound |= ( pair.first._layer == otherPair.first._layer &&
pair.first._neuron == otherPair.first._neuron &&
pair.second == otherPair.second );
}
allFound &= currentFound;
}
bool result = allFound && _value == other._value && _type == other._type &&
_neuronToCoefficient.size() == other._neuronToCoefficient.size();
return result;
}

/*
Get the matching coefficient for a NeuronIndex,
return 0 if the NeuronIndex is not present.
*/
double getCoeff( NLR::NeuronIndex index ) const
{
if ( _neuronToCoefficient.exists( index ) )
return _neuronToCoefficient[index];
return 0;
}

void dump() const
{
String output = "PolygonalTightening: ";
unsigned count = 0;
for ( const auto &pair : _neuronToCoefficient )
{
double coeff = pair.second;
if ( FloatUtils::isZero( coeff ) )
continue;

if ( count )
{
output += Stringf( "%s %.2lf neuron%u_%u ",
FloatUtils::isPositive( coeff ) ? "+" : "-",
FloatUtils::abs( coeff ),
pair.first._layer,
pair.first._neuron );
}
else
{
output +=
Stringf( "%.2lf neuron%u_%u ", coeff, pair.first._layer, pair.first._neuron );
}
++count;
}
if ( count == 0 )
{
output += Stringf( "%.2lf ", 0 );
}
output += Stringf( "%s %.2lf", _type == LB ? ">=" : "<=", _value );
printf( "%s\n", output.ascii() );
}
};
#endif // __PolygonalTightening_h__
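A minimal usage sketch for the class above. It assumes `NLR::NeuronIndex` is constructible from a ( layer, neuron ) pair and ordered, as `Map` keys require elsewhere in the NLR module; everything else comes directly from the header shown.

```cpp
// Minimal usage sketch (illustrative; NeuronIndex construction assumed).
#include "PolygonalTightening.h"

void polygonalTighteningExample()
{
    Map<NLR::NeuronIndex, double> coeffs;
    coeffs[NLR::NeuronIndex( 1, 0 )] = 1.0;  // +1.0 * neuron ( layer 1, index 0 )
    coeffs[NLR::NeuronIndex( 1, 1 )] = -2.0; // -2.0 * neuron ( layer 1, index 1 )

    // Encodes the polygonal constraint: neuron1_0 - 2 * neuron1_1 <= 0.5.
    PolygonalTightening pt( coeffs, 0.5, PolygonalTightening::UB );

    double present = pt.getCoeff( NLR::NeuronIndex( 1, 0 ) ); // 1.0
    double absent = pt.getCoeff( NLR::NeuronIndex( 2, 0 ) );  // 0 (not in the map)

    // Prints: "PolygonalTightening: 1.00 neuron1_0 - 2.00 neuron1_1 <= 0.50"
    pt.dump();

    (void)present;
    (void)absent;
}
```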