A while ago I wrote code to populate arrays which are (direct or indirect) arguments to a Java entry point. For example:
```java
class A {
    int[] arr1;
}

class B {
    void testme(A a, long[] arr2) { ... }
}
```
In this case, with `--function B.testme`, an array will be nondeterministically populated for both the direct parameter `arr2` and the indirect parameter `arr1`.
This is done by generating a nondet int for the array's length and using `ASSUME` statements to bound it to be >= 0 and <= the command-line argument `java-max-input-array-length` (default 5). Then, supposing that variable is named `nondet_arr_length`, a `cpp_new` allocation with type `int[nondet_arr_length]` is generated and compiled.
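For illustration, here is a rough C analogue of what the object factory emits for `arr2` (the real harness is generated as GOTO instructions, so the function name `make_input_array` and this exact shape are hypothetical):

```c
#include <stdlib.h>

// CBMC treats a declared-but-bodyless nondet_-prefixed function
// as returning a nondeterministic value.
int nondet_int(void);

// Hypothetical C sketch of the generated allocation for long[] arr2.
long *make_input_array(int *out_length)
{
  int nondet_arr_length = nondet_int();
  __CPROVER_assume(nondet_arr_length >= 0);
  __CPROVER_assume(nondet_arr_length <= 5); // java-max-input-array-length
  long *arr = malloc(sizeof(long) * nondet_arr_length);
  *out_length = nondet_arr_length;
  return arr; // element values are left nondeterministic
}
```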
This results in a large number of constraints being generated, because the array theory required to describe the effects of a series of array operations is quite verbose. By contrast, when the array has a fixed, known dimension it can simply be decomposed into a finite if-then-else chain over the possibilities (so `x[y] = 1` becomes `y == 0 ? x_0 = 1 : y == 1 ? x_1 = 1 : ...`).
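Concretely, for a known length of 3 the lowering amounts to something like the following (a sketch of the idea, not the solver's actual output):

```c
// Fixed-length lowering: int x[3] decomposed into scalars x_0..x_2.
int x_0, x_1, x_2;

// The single store x[y] = 1 becomes one guarded assignment per cell;
// each cell keeps its old value unless the index matches.
void store(int y)
{
  x_0 = (y == 0) ? 1 : x_0;
  x_1 = (y == 1) ? 1 : x_1;
  x_2 = (y == 2) ? 1 : x_2;
}
```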
In the Java case we often (nearly always?) have a low upper bound on such arrays' lengths. Therefore I suggest we investigate the effect on formula complexity of always allocating the maximum given array length. The only waste is that conditionals will be introduced that can never be satisfied (e.g. `idx == 4 ? ...` when the actual array length doesn't allow that), which should compare favourably with the always-expensive variable-length array theory.
This would actually be in line with what's being done when using `--model-argc-argv` in goto-instrument. (I'm starting to wonder whether that option should get promoted to CBMC, as it complements `java-max-input-array-length`.)
smowton added a commit to smowton/cbmc that referenced this issue on Apr 18, 2017:
This replaces the variable-length arrays produced by the Java object
factory with fixed-length ones annotated with a variable length member.
This allows the backend to use the much cheaper fixed-length array lowering
rather than the full variable-length array theory, while retaining the
ability for the effective length to vary. In effect this creates a bounded-
length array construct.
Fixes diffblue#746
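In C terms the resulting construct is roughly the following (names hypothetical; the actual patch works on the types generated by the Java object factory):

```c
#define MAX_LEN 5 // java-max-input-array-length

// Sketch of the bounded-length array construct: the backing storage is
// always MAX_LEN cells, so the cheap fixed-length lowering applies,
// while the nondet length member still lets the effective length vary.
struct bounded_int_array
{
  int length;        // nondet, assumed 0 <= length <= MAX_LEN
  int data[MAX_LEN]; // allocated at the maximum size regardless
};
```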