Handle partially optimized out values similarly to unavailable values

This fixes PR symtab/14604, PR symtab/14605, and Jan's test at
https://sourceware.org/ml/gdb-patches/2014-07/msg00158.html, in a tree
with bddbbed reverted:

 2014-07-22  Pedro Alves  <palves@redhat.com>

 	* value.c (allocate_optimized_out_value): Don't mark value as
 	non-lazy.

The PRs are about variables described by the DWARF as being split over
multiple registers using DWARF piece information, with some of those
registers marked as optimized out (not saved) by a later frame.  GDB
currently mishandles such partially-optimized-out values.
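
For illustration only (a hypothetical location description invented
here, not taken from the PR testcases), such a variable might be
described with DWARF piece operations like:

```
DW_OP_reg0        ; low 8 bytes live in register 0
DW_OP_piece 8
DW_OP_reg3        ; high 8 bytes live in register 3
DW_OP_piece 8
```

If a later frame's unwind info says register 3 was not saved, only the
second piece is unrecoverable; the first piece is still readable.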

Even though we can usually tell from the debug info whether a local or
global is optimized out, handling the case of a local living in a
register that was not saved in a frame requires fetching the variable.
GDB also needs to fetch a value to tell whether parts of it are
"<unavailable>".  Given this, it's not worth it to try to avoid
fetching lazy optimized-out values based on debug info alone.

So this patch makes GDB track which chunks of a value's contents are
optimized out like it tracks <unavailable> contents.  That is, it
makes value->optimized_out be a bit range vector instead of a boolean,
and removes the struct lval_funcs check_validity and check_any_valid
hooks.
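
The idea can be pictured with a minimal standalone sketch (simplified,
invented code; GDB's actual implementation uses a growable VEC(range_s)
and the functions named in the ChangeLog below): keep a sorted,
non-overlapping list of marked bit ranges, coalescing on insert, and
answer "is any bit in this span marked?" queries against it.

```c
#include <assert.h>
#include <string.h>

/* One marked bit range: [offset, offset + length).  */
struct range { int offset; int length; };

/* Fixed capacity is enough for a sketch; GDB uses a growable VEC.  */
struct ranges { struct range r[16]; int n; };

/* Mark [offset, offset + length), merging overlapping or adjacent
   ranges so the vector stays sorted and non-overlapping.  */
static void
mark_bits (struct ranges *v, int offset, int length)
{
  int lo = offset, hi = offset + length, i = 0, j;

  /* Skip ranges that end strictly before the new one starts.  */
  while (i < v->n && v->r[i].offset + v->r[i].length < lo)
    i++;

  /* Absorb every range that overlaps or touches [lo, hi).  */
  j = i;
  while (j < v->n && v->r[j].offset <= hi)
    {
      if (v->r[j].offset < lo)
        lo = v->r[j].offset;
      if (v->r[j].offset + v->r[j].length > hi)
        hi = v->r[j].offset + v->r[j].length;
      j++;
    }

  /* Replace the absorbed ranges [i, j) with the single merged one.  */
  memmove (&v->r[i + 1], &v->r[j], (v->n - j) * sizeof v->r[0]);
  v->n -= j - i - 1;
  v->r[i].offset = lo;
  v->r[i].length = hi - lo;
}

/* Return 1 if any bit in [offset, offset + length) is marked.  */
static int
ranges_contain (const struct ranges *v, int offset, int length)
{
  int i;

  for (i = 0; i < v->n; i++)
    if (v->r[i].offset < offset + length
        && offset < v->r[i].offset + v->r[i].length)
      return 1;
  return 0;
}
```

The same machinery then serves both the optimized-out and unavailable
vectors, which is why the patch factors insert_into_bit_range_vector
out of mark_value_bits_unavailable.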

Unlike Andrew's series, on which this is based (at
https://sourceware.org/ml/gdb-patches/2013-08/msg00300.html, note some
pieces have gone in since), this doesn't merge optimized out and
unavailable contents validity/availability behind a single interface,
nor does it merge the bit range vectors themselves (at least yet).
While it may be desirable to have a single entry point that returns
existence of contents irrespective of what may make them
invalid/unavailable, several places want to treat optimized out /
unavailable / etc. differently, so each spot that could potentially
use it will need to be carefully considered on a case-by-case basis,
and that is best done as a separate change.

This fixes Jan's test, because value_available_contents_eq wasn't
considering optimized out value contents.  It does now, and because of
that it's been renamed to value_contents_eq.
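
The new comparison semantics can be pictured with a toy byte-level
model (invented for illustration; the real value_contents_bits_eq
works on bit-range vectors): two chunks compare equal only if their
marked (optimized-out or unavailable) regions line up exactly and all
unmarked bytes match.

```c
#include <assert.h>

/* B1/B2 are contents; M1/M2 are per-byte flags saying whether a byte
   is optimized out or unavailable.  */
static int
contents_eq (const unsigned char *b1, const unsigned char *m1,
             const unsigned char *b2, const unsigned char *m2,
             int len)
{
  int i;

  for (i = 0; i < len; i++)
    {
      if (m1[i] != m2[i])           /* Marked regions must line up.  */
        return 0;
      if (!m1[i] && b1[i] != b2[i]) /* Only unmarked bytes compare.  */
        return 0;
    }
  return 1;
}
```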

A new intro comment is added to value.h describing "<optimized out>",
"<not saved>" and "<unavailable>" values.

gdb/
	PR symtab/14604
	PR symtab/14605
	* ada-lang.c (coerce_unspec_val_to_type): Use
	value_contents_copy_raw.
	* ada-valprint.c (val_print_packed_array_elements): Adjust.
	* c-valprint.c (c_val_print): Use value_bits_any_optimized_out.
	* cp-valprint.c (cp_print_value_fields): Let the common printing
	code handle optimized out values.
	(cp_print_value_fields_rtti): Use value_bits_any_optimized_out.
	* d-valprint.c (dynamic_array_type): Use
	value_bits_any_optimized_out.
	* dwarf2loc.c (entry_data_value_funcs): Remove check_validity and
	check_any_valid fields.
	(check_pieced_value_bits): Delete and inline ...
	(check_pieced_synthetic_pointer): ... here.
	(check_pieced_value_validity): Delete.
	(check_pieced_value_invalid): Delete.
	(pieced_value_funcs): Remove check_validity and check_any_valid
	fields.
	(read_pieced_value): Use mark_value_bits_optimized_out.
	(write_pieced_value): Switch to use
	mark_value_bytes_optimized_out.
	(dwarf2_evaluate_loc_desc_full): Copy the value contents instead
	of assuming the whole value is optimized out.
	* findvar.c (read_frame_register_value): Remove special handling
	of optimized out registers.
	(value_from_register): Use mark_value_bytes_optimized_out.
	* frame-unwind.c (frame_unwind_got_optimized): Use
	mark_value_bytes_optimized_out.
	* jv-valprint.c (java_value_print): Adjust.
	(java_print_value_fields): Let the common printing code handle
	optimized out values.
	* mips-tdep.c (mips_print_register): Remove special handling of
	optimized out registers.
	* opencl-lang.c (lval_func_check_validity): Delete.
	(lval_func_check_any_valid): Delete.
	(opencl_value_funcs): Remove check_validity and check_any_valid
	fields.
	* p-valprint.c (pascal_object_print_value_fields): Let the common
	printing code handle optimized out values.
	* stack.c (read_frame_arg): Remove special handling of optimized
	out values.  Fetch both VAL and ENTRYVAL before comparing
	contents.  Adjust to value_available_contents_eq rename.
	* valprint.c (valprint_check_validity)
	(val_print_scalar_formatted): Use value_bits_any_optimized_out.
	(val_print_array_elements): Adjust.
	* value.c (struct value) <optimized_out>: Now a VEC(range_s).
	(value_bits_any_optimized_out): New function.
	(value_entirely_covered_by_range_vector): New function, factored
	out from value_entirely_unavailable.
	(value_entirely_unavailable): Reimplement.
	(value_entirely_optimized_out): New function.
	(insert_into_bit_range_vector): New function, factored out from
	mark_value_bits_unavailable.
	(mark_value_bits_unavailable): Reimplement.
	(struct ranges_and_idx): New struct.
	(find_first_range_overlap_and_match): New function, factored out
	from value_available_contents_bits_eq.
	(value_available_contents_bits_eq): Rename to ...
	(value_contents_bits_eq): ... this.  Check both unavailable
	contents and optimized out contents.
	(value_available_contents_eq): Rename to ...
	(value_contents_eq): ... this.
	(allocate_value_lazy): Remove reference to the old optimized_out
	boolean.
	(allocate_optimized_out_value): Use
	mark_value_bytes_optimized_out.
	(require_not_optimized_out): Adjust to check whether the
	optimized_out vec is empty.
	(ranges_copy_adjusted): New function, factored out from
	value_contents_copy_raw.
	(value_contents_copy_raw): Also copy the optimized out ranges.
	Assert the destination ranges aren't optimized out.
	(value_contents_copy): Update comment, remove call to
	require_not_optimized_out.
	(value_contents_equal): Adjust to check whether the optimized_out
	vec is empty.
	(set_value_optimized_out, value_optimized_out_const): Delete.
	(mark_value_bytes_optimized_out, mark_value_bits_optimized_out):
	New functions.
	(value_entirely_optimized_out, value_bits_valid): Delete.
	(value_copy): Take a VEC copy of the 'optimized_out' field.
	(value_primitive_field): Remove special handling of optimized out.
	(value_fetch_lazy): Assert that lazy values have no unavailable
	regions.  Use value_bits_any_optimized_out.  Remove some special
	handling for optimized out values.
	* value.h: Add intro comment about <optimized out> and
	<unavailable>.
	(struct lval_funcs): Remove check_validity and check_any_valid
	fields.
	(set_value_optimized_out, value_optimized_out_const): Remove.
	(mark_value_bytes_optimized_out, mark_value_bits_optimized_out):
	New declarations.
	(value_bits_any_optimized_out): New declaration.
	(value_bits_valid): Delete declaration.
	(value_available_contents_eq): Rename to ...
	(value_contents_eq): ... this, and extend comments.

gdb/testsuite/
	PR symtab/14604
	PR symtab/14605
	* gdb.dwarf2/dw2-op-out-param.exp: Remove kfail branches and use
	gdb_test.
Pedro Alves 2014-08-20 00:07:40 +01:00
parent 6694c4110a
commit 9a0dc9e369
19 changed files with 552 additions and 448 deletions

gdb/ada-lang.c

@@ -679,14 +679,12 @@ coerce_unspec_val_to_type (struct value *val, struct type *type)
else
{
result = allocate_value (type);
memcpy (value_contents_raw (result), value_contents (val),
TYPE_LENGTH (type));
value_contents_copy_raw (result, 0, val, 0, TYPE_LENGTH (type));
}
set_value_component_location (result, val);
set_value_bitsize (result, value_bitsize (val));
set_value_bitpos (result, value_bitpos (val));
set_value_address (result, value_address (val));
set_value_optimized_out (result, value_optimized_out_const (val));
return result;
}
}

gdb/ada-valprint.c

@@ -185,9 +185,9 @@ val_print_packed_array_elements (struct type *type, const gdb_byte *valaddr,
(i * bitsize) / HOST_CHAR_BIT,
(i * bitsize) % HOST_CHAR_BIT,
bitsize, elttype);
if (!value_available_contents_eq (v0, value_embedded_offset (v0),
v1, value_embedded_offset (v1),
eltlen))
if (!value_contents_eq (v0, value_embedded_offset (v0),
v1, value_embedded_offset (v1),
eltlen))
break;
}

gdb/c-valprint.c

@@ -172,9 +172,9 @@ c_val_print (struct type *type, const gdb_byte *valaddr,
options->format)
&& value_bytes_available (original_value, embedded_offset,
TYPE_LENGTH (type))
&& value_bits_valid (original_value,
TARGET_CHAR_BIT * embedded_offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
&& !value_bits_any_optimized_out (original_value,
TARGET_CHAR_BIT * embedded_offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
{
int force_ellipses = 0;

gdb/cp-valprint.c

@@ -293,12 +293,6 @@ cp_print_value_fields (struct type *type, struct type *real_type,
{
fputs_filtered (_("<synthetic pointer>"), stream);
}
else if (!value_bits_valid (val,
TYPE_FIELD_BITPOS (type, i),
TYPE_FIELD_BITSIZE (type, i)))
{
val_print_optimized_out (val, stream);
}
else
{
struct value_print_options opts = *options;
@@ -433,8 +427,9 @@ cp_print_value_fields_rtti (struct type *type,
/* We require all bits to be valid in order to attempt a
conversion. */
if (value_bits_valid (val, TARGET_CHAR_BIT * offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
if (!value_bits_any_optimized_out (val,
TARGET_CHAR_BIT * offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
{
struct value *value;
int full, top, using_enc;

gdb/d-valprint.c

@@ -38,8 +38,9 @@ dynamic_array_type (struct type *type, const gdb_byte *valaddr,
&& TYPE_CODE (TYPE_FIELD_TYPE (type, 0)) == TYPE_CODE_INT
&& strcmp (TYPE_FIELD_NAME (type, 0), "length") == 0
&& strcmp (TYPE_FIELD_NAME (type, 1), "ptr") == 0
&& value_bits_valid (val, TARGET_CHAR_BIT * embedded_offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
&& !value_bits_any_optimized_out (val,
TARGET_CHAR_BIT * embedded_offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
{
CORE_ADDR addr;
struct type *elttype;

gdb/dwarf2loc.c

@@ -1300,8 +1300,6 @@ static const struct lval_funcs entry_data_value_funcs =
{
NULL, /* read */
NULL, /* write */
NULL, /* check_validity */
NULL, /* check_any_valid */
NULL, /* indirect */
entry_data_value_coerce_ref,
NULL, /* check_synthetic_pointer */
@@ -1710,7 +1708,7 @@ read_pieced_value (struct value *v)
memset (buffer, 0, this_size);
if (optim)
set_value_optimized_out (v, 1);
mark_value_bits_optimized_out (v, offset, this_size_bits);
if (unavail)
mark_value_bits_unavailable (v, offset, this_size_bits);
}
@@ -1770,7 +1768,7 @@ read_pieced_value (struct value *v)
break;
case DWARF_VALUE_OPTIMIZED_OUT:
set_value_optimized_out (v, 1);
mark_value_bits_optimized_out (v, offset, this_size_bits);
break;
default:
@@ -1808,7 +1806,7 @@ write_pieced_value (struct value *to, struct value *from)
if (frame == NULL)
{
set_value_optimized_out (to, 1);
mark_value_bytes_optimized_out (to, 0, TYPE_LENGTH (value_type (to)));
return;
}
@@ -1940,7 +1938,7 @@ write_pieced_value (struct value *to, struct value *from)
source_buffer, this_size);
break;
default:
set_value_optimized_out (to, 1);
mark_value_bytes_optimized_out (to, 0, TYPE_LENGTH (value_type (to)));
break;
}
offset += this_size_bits;
@@ -1949,24 +1947,16 @@ write_pieced_value (struct value *to, struct value *from)
do_cleanups (cleanup);
}
/* A helper function that checks bit validity in a pieced value.
CHECK_FOR indicates the kind of validity checking.
DWARF_VALUE_MEMORY means to check whether any bit is valid.
DWARF_VALUE_OPTIMIZED_OUT means to check whether any bit is
optimized out.
DWARF_VALUE_IMPLICIT_POINTER means to check whether the bits are an
implicit pointer. */
/* An implementation of an lval_funcs method to see whether a value is
a synthetic pointer. */
static int
check_pieced_value_bits (const struct value *value, int bit_offset,
int bit_length,
enum dwarf_value_location check_for)
check_pieced_synthetic_pointer (const struct value *value, int bit_offset,
int bit_length)
{
struct piece_closure *c
= (struct piece_closure *) value_computed_closure (value);
int i;
int validity = (check_for == DWARF_VALUE_MEMORY
|| check_for == DWARF_VALUE_IMPLICIT_POINTER);
bit_offset += 8 * value_offset (value);
if (value_bitsize (value))
@@ -1991,52 +1981,11 @@ check_pieced_value_bits (const struct value *value, int bit_offset,
else
bit_length -= this_size_bits;
if (check_for == DWARF_VALUE_IMPLICIT_POINTER)
{
if (p->location != DWARF_VALUE_IMPLICIT_POINTER)
return 0;
}
else if (p->location == DWARF_VALUE_OPTIMIZED_OUT
|| p->location == DWARF_VALUE_IMPLICIT_POINTER)
{
if (validity)
return 0;
}
else
{
if (!validity)
return 1;
}
if (p->location != DWARF_VALUE_IMPLICIT_POINTER)
return 0;
}
return validity;
}
static int
check_pieced_value_validity (const struct value *value, int bit_offset,
int bit_length)
{
return check_pieced_value_bits (value, bit_offset, bit_length,
DWARF_VALUE_MEMORY);
}
static int
check_pieced_value_invalid (const struct value *value)
{
return check_pieced_value_bits (value, 0,
8 * TYPE_LENGTH (value_type (value)),
DWARF_VALUE_OPTIMIZED_OUT);
}
/* An implementation of an lval_funcs method to see whether a value is
a synthetic pointer. */
static int
check_pieced_synthetic_pointer (const struct value *value, int bit_offset,
int bit_length)
{
return check_pieced_value_bits (value, bit_offset, bit_length,
DWARF_VALUE_IMPLICIT_POINTER);
return 1;
}
/* A wrapper function for get_frame_address_in_block. */
@@ -2185,8 +2134,6 @@ free_pieced_value_closure (struct value *v)
static const struct lval_funcs pieced_value_funcs = {
read_pieced_value,
write_pieced_value,
check_pieced_value_validity,
check_pieced_value_invalid,
indirect_pieced_value,
NULL, /* coerce_ref */
check_pieced_synthetic_pointer,
@@ -2316,6 +2263,8 @@ dwarf2_evaluate_loc_desc_full (struct type *type, struct frame_info *frame,
retval = value_from_register (type, gdb_regnum, frame);
if (value_optimized_out (retval))
{
struct value *tmp;
/* This means the register has undefined value / was
not saved. As we're computing the location of some
variable etc. in the program, not a value for
@@ -2323,7 +2272,9 @@ dwarf2_evaluate_loc_desc_full (struct type *type, struct frame_info *frame,
generic optimized out value instead, so that we show
<optimized out> instead of <not saved>. */
do_cleanups (value_chain);
retval = allocate_optimized_out_value (type);
tmp = allocate_value (type);
value_contents_copy (tmp, 0, retval, 0, TYPE_LENGTH (type));
retval = tmp;
}
}
break;

gdb/findvar.c

@@ -679,12 +679,6 @@ read_frame_register_value (struct value *value, struct frame_info *frame)
struct value *regval = get_frame_register_value (frame, regnum);
int reg_len = TYPE_LENGTH (value_type (regval)) - reg_offset;
if (value_optimized_out (regval))
{
set_value_optimized_out (value, 1);
break;
}
/* If the register length is larger than the number of bytes
remaining to copy, then only copy the appropriate bytes. */
if (reg_len > len)
@@ -730,7 +724,7 @@ value_from_register (struct type *type, int regnum, struct frame_info *frame)
if (!ok)
{
if (optim)
set_value_optimized_out (v, 1);
mark_value_bytes_optimized_out (v, 0, TYPE_LENGTH (type));
if (unavail)
mark_value_bytes_unavailable (v, 0, TYPE_LENGTH (type));
}

gdb/frame-unwind.c

@@ -202,7 +202,7 @@ frame_unwind_got_optimized (struct frame_info *frame, int regnum)
"<not saved>". */
val = allocate_value_lazy (type);
set_value_lazy (val, 0);
set_value_optimized_out (val, 1);
mark_value_bytes_optimized_out (val, 0, TYPE_LENGTH (type));
VALUE_LVAL (val) = lval_register;
VALUE_REGNUM (val) = regnum;
VALUE_FRAME_ID (val) = get_frame_id (frame);

gdb/jv-valprint.c

@@ -181,10 +181,10 @@ java_value_print (struct value *val, struct ui_file *stream,
set_value_offset (next_v, value_offset (next_v)
+ TYPE_LENGTH (el_type));
value_fetch_lazy (next_v);
if (!(value_available_contents_eq
(v, value_embedded_offset (v),
next_v, value_embedded_offset (next_v),
TYPE_LENGTH (el_type))))
if (!value_contents_eq (v, value_embedded_offset (v),
next_v,
value_embedded_offset (next_v),
TYPE_LENGTH (el_type)))
break;
}
@@ -391,11 +391,6 @@ java_print_value_fields (struct type *type, const gdb_byte *valaddr,
{
fputs_filtered (_("<synthetic pointer>"), stream);
}
else if (!value_bits_valid (val, TYPE_FIELD_BITPOS (type, i),
TYPE_FIELD_BITSIZE (type, i)))
{
val_print_optimized_out (val, stream);
}
else
{
struct value_print_options opts;

gdb/mips-tdep.c

@@ -6194,12 +6194,6 @@ mips_print_register (struct ui_file *file, struct frame_info *frame,
}
val = get_frame_register_value (frame, regnum);
if (value_optimized_out (val))
{
fprintf_filtered (file, "%s: [Invalid]",
gdbarch_register_name (gdbarch, regnum));
return;
}
fputs_filtered (gdbarch_register_name (gdbarch, regnum), file);

gdb/opencl-lang.c

@@ -238,58 +238,6 @@ lval_func_write (struct value *v, struct value *fromval)
value_free_to_mark (mark);
}
/* Return nonzero if all bits in V within OFFSET and LENGTH are valid. */
static int
lval_func_check_validity (const struct value *v, int offset, int length)
{
struct lval_closure *c = (struct lval_closure *) value_computed_closure (v);
/* Size of the target type in bits. */
int elsize =
TYPE_LENGTH (TYPE_TARGET_TYPE (check_typedef (value_type (c->val)))) * 8;
int startrest = offset % elsize;
int start = offset / elsize;
int endrest = (offset + length) % elsize;
int end = (offset + length) / elsize;
int i;
if (endrest)
end++;
if (end > c->n)
return 0;
for (i = start; i < end; i++)
{
int comp_offset = (i == start) ? startrest : 0;
int comp_length = (i == end) ? endrest : elsize;
if (!value_bits_valid (c->val, c->indices[i] * elsize + comp_offset,
comp_length))
return 0;
}
return 1;
}
/* Return nonzero if any bit in V is valid. */
static int
lval_func_check_any_valid (const struct value *v)
{
struct lval_closure *c = (struct lval_closure *) value_computed_closure (v);
/* Size of the target type in bits. */
int elsize =
TYPE_LENGTH (TYPE_TARGET_TYPE (check_typedef (value_type (c->val)))) * 8;
int i;
for (i = 0; i < c->n; i++)
if (value_bits_valid (c->val, c->indices[i] * elsize, elsize))
return 1;
return 0;
}
/* Return nonzero if bits in V from OFFSET and LENGTH represent a
synthetic pointer. */
@@ -356,8 +304,6 @@ static const struct lval_funcs opencl_value_funcs =
{
lval_func_read,
lval_func_write,
lval_func_check_validity,
lval_func_check_any_valid,
NULL, /* indirect */
NULL, /* coerce_ref */
lval_func_check_synthetic_pointer,

gdb/p-valprint.c

@@ -627,11 +627,6 @@ pascal_object_print_value_fields (struct type *type, const gdb_byte *valaddr,
{
fputs_filtered (_("<synthetic pointer>"), stream);
}
else if (!value_bits_valid (val, TYPE_FIELD_BITPOS (type, i),
TYPE_FIELD_BITSIZE (type, i)))
{
val_print_optimized_out (val, stream);
}
else
{
struct value_print_options opts = *options;

gdb/stack.c

@@ -382,9 +382,12 @@ read_frame_arg (struct symbol *sym, struct frame_info *frame,
{
struct type *type = value_type (val);
if (!value_optimized_out (val)
&& value_available_contents_eq (val, 0, entryval, 0,
TYPE_LENGTH (type)))
if (value_lazy (val))
value_fetch_lazy (val);
if (value_lazy (entryval))
value_fetch_lazy (entryval);
if (value_contents_eq (val, 0, entryval, 0, TYPE_LENGTH (type)))
{
/* Initialize it just to avoid a GCC false warning. */
struct value *val_deref = NULL, *entryval_deref;
@@ -410,11 +413,9 @@ read_frame_arg (struct symbol *sym, struct frame_info *frame,
/* If the reference addresses match but dereferenced
content does not match print them. */
if (val != val_deref
&& !value_optimized_out (val_deref)
&& !value_optimized_out (entryval_deref)
&& value_available_contents_eq (val_deref, 0,
entryval_deref, 0,
TYPE_LENGTH (type_deref)))
&& value_contents_eq (val_deref, 0,
entryval_deref, 0,
TYPE_LENGTH (type_deref)))
val_equal = 1;
}


gdb/testsuite/gdb.dwarf2/dw2-op-out-param.exp

@@ -50,36 +50,12 @@ gdb_test "bt" "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?int_param_single_
# (2) struct_param_single_reg_loc
gdb_continue_to_breakpoint "Stop in breakpt for struct_param_single_reg_loc"
set test "Backtrace for test struct_param_single_reg_loc"
gdb_test_multiple "bt" "$test" {
-re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?struct_param_single_reg_loc \\(operand0={a = 0xdeadbe00deadbe01, b = <optimized out>}, operand1={a = <optimized out>, b = 0xdeadbe04deadbe05}, operand2=<optimized out>\\)\r\n#2 ($hex in )?main \\(\\)\r\n$gdb_prompt $" {
xpass $test
}
-re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?struct_param_single_reg_loc \\(operand0=<optimized out>, operand1=<optimized out>, operand2=<optimized out>\\)\r\n#2 ($hex in )?main \\(\\)\r\n$gdb_prompt $" {
kfail "symtab/14604" $test
}
}
gdb_test "bt" "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?struct_param_single_reg_loc \\(operand0={a = 0xdeadbe00deadbe01, b = <optimized out>}, operand1={a = <optimized out>, b = 0xdeadbe04deadbe05}, operand2=<optimized out>\\)\r\n#2 ($hex in )?main \\(\\)" "Backtrace for test struct_param_single_reg_loc"
# (3) struct_param_two_reg_pieces
gdb_continue_to_breakpoint "Stop in breakpt for struct_param_two_reg_pieces"
set test "Backtrace for test struct_param_two_reg_pieces"
gdb_test_multiple "bt" "$test" {
-re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?struct_param_two_reg_pieces \\(operand0={a = 0xdeadbe04deadbe05, b = <optimized out>}, operand1={a = <optimized out>, b = 0xdeadbe00deadbe01}, operand2=<optimized out>\\)\r\n#2 ($hex in )?main \\(\\)\r\n$gdb_prompt $" {
xpass $test
}
-re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?struct_param_two_reg_pieces \\(operand0=.*, operand1=.*, operand2=.*\\)\r\n#2 ($hex in )?main \\(\\)\r\n$gdb_prompt $" {
kfail "symtab/14605" $test
}
}
gdb_test "bt" "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?struct_param_two_reg_pieces \\(operand0={a = 0xdeadbe04deadbe05, b = <optimized out>}, operand1={a = <optimized out>, b = 0xdeadbe00deadbe01}, operand2=<optimized out>\\)\r\n#2 ($hex in )?main \\(\\)" "Backtrace for test struct_param_two_reg_pieces"
# (4) int_param_two_reg_pieces
gdb_continue_to_breakpoint "Stop in breakpt for int_param_two_reg_pieces"
set test "Backtrace for test int_param_two_reg_pieces"
gdb_test_multiple "bt" "$test" {
-re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?int_param_two_reg_pieces \\(operand0=<optimized out>, operand1=<optimized out>, operand2=<optimized out>\\)\r\n#2 ($hex in )?main \\(\\)\r\n$gdb_prompt $" {
xpass $test
}
-re "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?int_param_two_reg_pieces \\(operand0=.*, operand1=.*, operand2=.*\\)\r\n#2 ($hex in )?main \\(\\)\r\n$gdb_prompt $" {
kfail "symtab/14605" $test
}
}
gdb_test "bt" "#0 ($hex in )?breakpt \\(\\)\r\n#1 ($hex in )?int_param_two_reg_pieces \\(operand0=<optimized out>, operand1=<optimized out>, operand2=<optimized out>\\)\r\n#2 ($hex in )?main \\(\\)" "Backtrace for test int_param_two_reg_pieces"

gdb/valprint.c

@@ -308,8 +308,9 @@ valprint_check_validity (struct ui_file *stream,
&& TYPE_CODE (type) != TYPE_CODE_STRUCT
&& TYPE_CODE (type) != TYPE_CODE_ARRAY)
{
if (!value_bits_valid (val, TARGET_CHAR_BIT * embedded_offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
if (value_bits_any_optimized_out (val,
TARGET_CHAR_BIT * embedded_offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
{
val_print_optimized_out (val, stream);
return 0;
@@ -980,8 +981,9 @@ val_print_scalar_formatted (struct type *type,
/* A scalar object that does not have all bits available can't be
printed, because all bits contribute to its representation. */
if (!value_bits_valid (val, TARGET_CHAR_BIT * embedded_offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
if (value_bits_any_optimized_out (val,
TARGET_CHAR_BIT * embedded_offset,
TARGET_CHAR_BIT * TYPE_LENGTH (type)))
val_print_optimized_out (val, stream);
else if (!value_bytes_available (val, embedded_offset, TYPE_LENGTH (type)))
val_print_unavailable (stream);
@@ -1682,12 +1684,12 @@ val_print_array_elements (struct type *type,
if (options->repeat_count_threshold < UINT_MAX)
{
while (rep1 < len
&& value_available_contents_eq (val,
embedded_offset + i * eltlen,
val,
(embedded_offset
+ rep1 * eltlen),
eltlen))
&& value_contents_eq (val,
embedded_offset + i * eltlen,
val,
(embedded_offset
+ rep1 * eltlen),
eltlen))
{
++reps;
++rep1;

gdb/value.c

@@ -195,15 +195,6 @@ struct value
reset, be sure to consider this use as well! */
unsigned int lazy : 1;
/* If nonzero, this is the value of a variable that does not
actually exist in the program. If nonzero, and LVAL is
lval_register, this is a register ($pc, $sp, etc., never a
program variable) that has not been saved in the frame. All
optimized-out values are treated pretty much the same, except
registers have a different string representation and related
error strings. */
unsigned int optimized_out : 1;
/* If value is a variable, is it initialized or not. */
unsigned int initialized : 1;
@@ -334,9 +325,20 @@ struct value
/* Unavailable ranges in CONTENTS. We mark unavailable ranges,
rather than available, since the common and default case is for a
value to be available. This is filled in at value read time. The
unavailable ranges are tracked in bits. */
value to be available. This is filled in at value read time.
The unavailable ranges are tracked in bits. Note that a contents
bit that has been optimized out doesn't really exist in the
program, so it can't be marked unavailable either. */
VEC(range_s) *unavailable;
/* Likewise, but for optimized out contents (a chunk of the value of
a variable that does not actually exist in the program). If LVAL
is lval_register, this is a register ($pc, $sp, etc., never a
program variable) that has not been saved in the frame. Not
saved registers and optimized-out program variables values are
treated pretty much the same, except not-saved registers have a
different string representation and related error strings. */
VEC(range_s) *optimized_out;
};
int
@@ -355,6 +357,14 @@ value_bytes_available (const struct value *value, int offset, int length)
length * TARGET_CHAR_BIT);
}
int
value_bits_any_optimized_out (const struct value *value, int bit_offset,
			      int bit_length)
{
gdb_assert (!value->lazy);
return ranges_contain (value->optimized_out, bit_offset, bit_length);
}
int
value_entirely_available (struct value *value)
{
@@ -368,17 +378,22 @@ value_entirely_available (struct value *value)
return 0;
}
int
value_entirely_unavailable (struct value *value)
/* Returns true if VALUE is entirely covered by RANGES. If the value
is lazy, it'll be read now.  Note that RANGES is a pointer to
pointer because reading the value might change *RANGES.  */
static int
value_entirely_covered_by_range_vector (struct value *value,
VEC(range_s) **ranges)
{
/* We can only tell whether the whole value is available when we try
to read it. */
/* We can only tell whether the whole value is optimized out /
unavailable when we try to read it. */
if (value->lazy)
value_fetch_lazy (value);
if (VEC_length (range_s, value->unavailable) == 1)
if (VEC_length (range_s, *ranges) == 1)
{
struct range *t = VEC_index (range_s, value->unavailable, 0);
struct range *t = VEC_index (range_s, *ranges, 0);
if (t->offset == 0
&& t->length == (TARGET_CHAR_BIT
@@ -389,8 +404,23 @@ value_entirely_unavailable (struct value *value)
return 0;
}
void
mark_value_bits_unavailable (struct value *value, int offset, int length)
int
value_entirely_unavailable (struct value *value)
{
return value_entirely_covered_by_range_vector (value, &value->unavailable);
}
int
value_entirely_optimized_out (struct value *value)
{
return value_entirely_covered_by_range_vector (value, &value->optimized_out);
}
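
Because insertions coalesce overlapping and abutting ranges (see below), a value is entirely covered exactly when its vector has collapsed to a single range spanning all of the value's bits, which is the test `value_entirely_covered_by_range_vector` performs. A standalone sketch of that predicate (illustrative, not GDB's code):

```c
#include <assert.h>

/* A bit range, as in GDB's range_s.  */
struct range
{
  int offset;
  int length;
};

/* Nonzero iff the coalesced, sorted vector of N ranges covers all
   TOTAL_BITS of a value, i.e. it is exactly one range [0, TOTAL_BITS).  */
static int
entirely_covered (const struct range *ranges, int n, int total_bits)
{
  return (n == 1
          && ranges[0].offset == 0
          && ranges[0].length == total_bits);
}
```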
/* Insert into the vector pointed to by VECTORP the bit range starting
at OFFSET bits, and extending for the next LENGTH bits. */
static void
insert_into_bit_range_vector (VEC(range_s) **vectorp, int offset, int length)
{
range_s newr;
int i;
@ -481,10 +511,10 @@ mark_value_bits_unavailable (struct value *value, int offset, int length)
*/
i = VEC_lower_bound (range_s, value->unavailable, &newr, range_lessthan);
i = VEC_lower_bound (range_s, *vectorp, &newr, range_lessthan);
if (i > 0)
{
struct range *bef = VEC_index (range_s, value->unavailable, i - 1);
struct range *bef = VEC_index (range_s, *vectorp, i - 1);
if (ranges_overlap (bef->offset, bef->length, offset, length))
{
@ -505,18 +535,18 @@ mark_value_bits_unavailable (struct value *value, int offset, int length)
else
{
/* #3 */
VEC_safe_insert (range_s, value->unavailable, i, &newr);
VEC_safe_insert (range_s, *vectorp, i, &newr);
}
}
else
{
/* #4 */
VEC_safe_insert (range_s, value->unavailable, i, &newr);
VEC_safe_insert (range_s, *vectorp, i, &newr);
}
/* Check whether the ranges following the one we've just added or
touched can be folded in (#5 above). */
if (i + 1 < VEC_length (range_s, value->unavailable))
if (i + 1 < VEC_length (range_s, *vectorp))
{
struct range *t;
struct range *r;
@ -524,11 +554,11 @@ mark_value_bits_unavailable (struct value *value, int offset, int length)
int next = i + 1;
/* Get the range we just touched. */
t = VEC_index (range_s, value->unavailable, i);
t = VEC_index (range_s, *vectorp, i);
removed = 0;
i = next;
for (; VEC_iterate (range_s, value->unavailable, i, r); i++)
for (; VEC_iterate (range_s, *vectorp, i, r); i++)
if (r->offset <= t->offset + t->length)
{
ULONGEST l, h;
@ -550,10 +580,16 @@ mark_value_bits_unavailable (struct value *value, int offset, int length)
}
if (removed != 0)
VEC_block_remove (range_s, value->unavailable, next, removed);
VEC_block_remove (range_s, *vectorp, next, removed);
}
}
void
mark_value_bits_unavailable (struct value *value, int offset, int length)
{
insert_into_bit_range_vector (&value->unavailable, offset, length);
}
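
The insertion cases handled above (#1-#5) all preserve one invariant: the vector stays sorted, with overlapping or abutting ranges merged. A simpler sort-then-merge pass gives the same result; GDB instead splices in place on the already-sorted vector for efficiency. This sketch uses illustrative names and a fixed-capacity vector:

```c
#include <assert.h>
#include <stdlib.h>

struct range
{
  int offset;
  int length;
};

/* A toy fixed-capacity stand-in for GDB's VEC(range_s).  */
struct range_vec
{
  struct range r[32];
  int len;
};

static int
range_cmp (const void *a, const void *b)
{
  return ((const struct range *) a)->offset
         - ((const struct range *) b)->offset;
}

/* Insert the bit range [offset, offset+length) into VEC, then merge
   any ranges that overlap or abut, so the vector stays sorted and
   non-overlapping.  */
static void
insert_bit_range (struct range_vec *vec, int offset, int length)
{
  int i, out;

  vec->r[vec->len].offset = offset;
  vec->r[vec->len].length = length;
  vec->len++;

  qsort (vec->r, vec->len, sizeof vec->r[0], range_cmp);

  /* Merge pass: a range that starts at or before the current output
     range's end extends it; anything else opens a new output slot.  */
  out = 0;
  for (i = 1; i < vec->len; i++)
    {
      struct range *cur = &vec->r[out];
      int end = vec->r[i].offset + vec->r[i].length;

      if (vec->r[i].offset <= cur->offset + cur->length)
        {
          if (end > cur->offset + cur->length)
            cur->length = end - cur->offset;
        }
      else
        vec->r[++out] = vec->r[i];
    }
  vec->len = out + 1;
}
```

Marking bits unavailable or optimized out then differs only in which vector is passed, which is exactly why the patch factors `insert_into_bit_range_vector` out of `mark_value_bits_unavailable`.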
void
mark_value_bytes_unavailable (struct value *value, int offset, int length)
{
@ -682,48 +718,53 @@ memcmp_with_bit_offsets (const gdb_byte *ptr1, size_t offset1_bits,
return 0;
}
/* Helper function for value_available_contents_eq. The only difference is
that this function is bit rather than byte based.
/* Helper struct for find_first_range_overlap_and_match and
value_contents_bits_eq. Keep track of which slot of a given ranges
vector we last looked at. */
Compare LENGTH bits of VAL1's contents starting at OFFSET1 bits with
LENGTH bits of VAL2's contents starting at OFFSET2 bits. Return true
if the available bits match. */
struct ranges_and_idx
{
/* The ranges. */
VEC(range_s) *ranges;
/* The range we've last found in RANGES. Given ranges are sorted,
we can start the next lookup here. */
int idx;
};
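
Since the vectors are sorted, each lookup can resume from the slot the previous lookup found, which is what the `idx` field caches. A possible standalone sketch of the underlying search (`find_first_range_overlap` already exists elsewhere in value.c; this version is illustrative only):

```c
#include <assert.h>

struct range
{
  int offset;
  int length;
};

/* Return the index of the first range, at or after index POS, that
   overlaps [offset, offset+length), or -1 if there is none.  The
   vector is sorted by offset, so the scan can stop as soon as a
   range starts past the end of the window.  */
static int
find_first_range_overlap (const struct range *ranges, int n,
                          int pos, int offset, int length)
{
  int i;

  for (i = pos; i < n; i++)
    {
      if (ranges[i].offset >= offset + length)
        break;
      if (ranges[i].offset + ranges[i].length > offset)
        return i;
    }
  return -1;
}
```

Passing the previous return value back as POS makes a sequence of lookups over increasing offsets linear overall rather than quadratic.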
/* Helper function for value_contents_bits_eq. Compare LENGTH bits of
RP1's ranges starting at OFFSET1 bits with LENGTH bits of RP2's
ranges starting at OFFSET2 bits. Return true if the ranges match
and fill in *L and *H with the overlapping window relative to
(both) OFFSET1 or OFFSET2. */
static int
value_available_contents_bits_eq (const struct value *val1, int offset1,
const struct value *val2, int offset2,
int length)
find_first_range_overlap_and_match (struct ranges_and_idx *rp1,
struct ranges_and_idx *rp2,
int offset1, int offset2,
int length, ULONGEST *l, ULONGEST *h)
{
int idx1 = 0, idx2 = 0;
rp1->idx = find_first_range_overlap (rp1->ranges, rp1->idx,
offset1, length);
rp2->idx = find_first_range_overlap (rp2->ranges, rp2->idx,
offset2, length);
/* See function description in value.h. */
gdb_assert (!val1->lazy && !val2->lazy);
while (length > 0)
if (rp1->idx == -1 && rp2->idx == -1)
{
*l = length;
*h = length;
return 1;
}
else if (rp1->idx == -1 || rp2->idx == -1)
return 0;
else
{
range_s *r1, *r2;
ULONGEST l1, h1;
ULONGEST l2, h2;
idx1 = find_first_range_overlap (val1->unavailable, idx1,
offset1, length);
idx2 = find_first_range_overlap (val2->unavailable, idx2,
offset2, length);
/* The usual case is for both values to be completely available. */
if (idx1 == -1 && idx2 == -1)
return (memcmp_with_bit_offsets (val1->contents, offset1,
val2->contents, offset2,
length) == 0);
/* The contents only match equal if the available set matches as
well. */
else if (idx1 == -1 || idx2 == -1)
return 0;
gdb_assert (idx1 != -1 && idx2 != -1);
r1 = VEC_index (range_s, val1->unavailable, idx1);
r2 = VEC_index (range_s, val2->unavailable, idx2);
r1 = VEC_index (range_s, rp1->ranges, rp1->idx);
r2 = VEC_index (range_s, rp2->ranges, rp2->idx);
/* Get the unavailable windows intersected by the incoming
ranges. The first and last ranges that overlap the argument
@ -732,7 +773,7 @@ value_available_contents_bits_eq (const struct value *val1, int offset1,
h1 = min (offset1 + length, r1->offset + r1->length);
l2 = max (offset2, r2->offset);
h2 = min (offset2 + length, r2->offset + r2->length);
h2 = min (offset2 + length, offset2 + r2->length);
/* Make them relative to the respective start offsets, so we can
compare them for equality. */
@ -742,31 +783,93 @@ value_available_contents_bits_eq (const struct value *val1, int offset1,
l2 -= offset2;
h2 -= offset2;
/* Different availability, no match. */
/* Different ranges, no match. */
if (l1 != l2 || h1 != h2)
return 0;
/* Compare the _available_ contents. */
*h = h1;
*l = l1;
return 1;
}
}
/* Helper function for value_contents_eq. The only difference is that
this function is bit rather than byte based.
Compare LENGTH bits of VAL1's contents starting at OFFSET1 bits
with LENGTH bits of VAL2's contents starting at OFFSET2 bits.
Return true if the available bits match. */
static int
value_contents_bits_eq (const struct value *val1, int offset1,
const struct value *val2, int offset2,
int length)
{
/* Each array element corresponds to a ranges source (unavailable,
optimized out). '1' is for VAL1, '2' for VAL2. */
struct ranges_and_idx rp1[2], rp2[2];
/* See function description in value.h. */
gdb_assert (!val1->lazy && !val2->lazy);
/* We shouldn't be trying to compare past the end of the values. */
gdb_assert (offset1 + length
<= TYPE_LENGTH (val1->enclosing_type) * TARGET_CHAR_BIT);
gdb_assert (offset2 + length
<= TYPE_LENGTH (val2->enclosing_type) * TARGET_CHAR_BIT);
memset (&rp1, 0, sizeof (rp1));
memset (&rp2, 0, sizeof (rp2));
rp1[0].ranges = val1->unavailable;
rp2[0].ranges = val2->unavailable;
rp1[1].ranges = val1->optimized_out;
rp2[1].ranges = val2->optimized_out;
while (length > 0)
{
ULONGEST l, h;
int i;
for (i = 0; i < 2; i++)
{
ULONGEST l_tmp, h_tmp;
/* The contents only compare equal if the invalid/unavailable
contents ranges match as well. */
if (!find_first_range_overlap_and_match (&rp1[i], &rp2[i],
offset1, offset2, length,
&l_tmp, &h_tmp))
return 0;
/* We're interested in the lowest/first range found. */
if (i == 0 || l_tmp < l)
{
l = l_tmp;
h = h_tmp;
}
}
/* Compare the available/valid contents. */
if (memcmp_with_bit_offsets (val1->contents, offset1,
val2->contents, offset2, l1) != 0)
val2->contents, offset2, l) != 0)
return 0;
length -= h1;
offset1 += h1;
offset2 += h1;
length -= h;
offset1 += h;
offset2 += h;
}
return 1;
}
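
The matching rule above (the validity metadata must line up exactly, and only then are the valid bytes compared) can be illustrated at byte granularity, with a per-byte flag standing in for the two range vectors. This is a sketch of the semantics, not GDB's representation:

```c
#include <assert.h>

/* Per-byte metadata: 0 = valid, 1 = unavailable, 2 = optimized out.  */
static int
contents_eq (const unsigned char *buf1, const unsigned char *meta1,
             int offset1,
             const unsigned char *buf2, const unsigned char *meta2,
             int offset2, int length)
{
  int i;

  for (i = 0; i < length; i++)
    {
      /* The validity metadata must match chunk by chunk...  */
      if (meta1[offset1 + i] != meta2[offset2 + i])
        return 0;
      /* ...and only then are the valid bytes themselves compared.  */
      if (meta1[offset1 + i] == 0
          && buf1[offset1 + i] != buf2[offset2 + i])
        return 0;
    }
  return 1;
}
```

So unavailable compares equal to unavailable, optimized-out to optimized-out, and neither to valid data, matching the `value_contents_eq` documentation in value.h.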
int
value_available_contents_eq (const struct value *val1, int offset1,
const struct value *val2, int offset2,
int length)
value_contents_eq (const struct value *val1, int offset1,
const struct value *val2, int offset2,
int length)
{
return value_available_contents_bits_eq (val1, offset1 * TARGET_CHAR_BIT,
val2, offset2 * TARGET_CHAR_BIT,
length * TARGET_CHAR_BIT);
return value_contents_bits_eq (val1, offset1 * TARGET_CHAR_BIT,
val2, offset2 * TARGET_CHAR_BIT,
length * TARGET_CHAR_BIT);
}
/* Prototypes for local functions. */
@ -834,7 +937,6 @@ allocate_value_lazy (struct type *type)
val->bitsize = 0;
VALUE_REGNUM (val) = -1;
val->lazy = 1;
val->optimized_out = 0;
val->embedded_offset = 0;
val->pointed_to_offset = 0;
val->modifiable = 1;
@ -903,11 +1005,8 @@ allocate_optimized_out_value (struct type *type)
{
struct value *retval = allocate_value_lazy (type);
set_value_optimized_out (retval, 1);
/* FIXME: we should be able to avoid allocating the value's contents
buffer, but value_available_contents_bits_eq can't handle
that. */
/* set_value_lazy (retval, 0); */
mark_value_bytes_optimized_out (retval, 0, TYPE_LENGTH (type));
set_value_lazy (retval, 0);
return retval;
}
@ -1055,7 +1154,7 @@ error_value_optimized_out (void)
static void
require_not_optimized_out (const struct value *value)
{
if (value->optimized_out)
if (!VEC_empty (range_s, value->optimized_out))
{
if (value->lval == lval_register)
error (_("register has not been saved in frame"));
@ -1095,6 +1194,31 @@ value_contents_all (struct value *value)
return result;
}
/* Copy the ranges in SRC_RANGE that overlap [SRC_BIT_OFFSET,
SRC_BIT_OFFSET+BIT_LENGTH) into *DST_RANGE, adjusted. */
static void
ranges_copy_adjusted (VEC (range_s) **dst_range, int dst_bit_offset,
VEC (range_s) *src_range, int src_bit_offset,
int bit_length)
{
range_s *r;
int i;
for (i = 0; VEC_iterate (range_s, src_range, i, r); i++)
{
ULONGEST h, l;
l = max (r->offset, src_bit_offset);
h = min (r->offset + r->length, src_bit_offset + bit_length);
if (l < h)
insert_into_bit_range_vector (dst_range,
dst_bit_offset + (l - src_bit_offset),
h - l);
}
}
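
The loop intersects each source range with the copy window and rebases the surviving portion into the destination's bit-offset space. A self-contained sketch of that intersect-and-rebase step (the function and vector here are illustrative; GDB's `ranges_copy_adjusted` inserts into a coalescing vector instead of a flat array):

```c
#include <assert.h>

struct range
{
  int offset;
  int length;
};

static int max_i (int a, int b) { return a > b ? a : b; }
static int min_i (int a, int b) { return a < b ? a : b; }

/* Copy the portions of the N ranges in SRC that intersect the window
   [src_bit_offset, src_bit_offset + bit_length) into DST, rebased so
   that src_bit_offset maps to dst_bit_offset.  Returns the number of
   ranges written.  */
static int
copy_overlapping_ranges (struct range *dst, int dst_bit_offset,
                         const struct range *src, int n,
                         int src_bit_offset, int bit_length)
{
  int i, out = 0;

  for (i = 0; i < n; i++)
    {
      int l = max_i (src[i].offset, src_bit_offset);
      int h = min_i (src[i].offset + src[i].length,
                     src_bit_offset + bit_length);

      if (l < h)
        {
          dst[out].offset = dst_bit_offset + (l - src_bit_offset);
          dst[out].length = h - l;
          out++;
        }
    }
  return out;
}
```

`value_contents_copy_raw` below calls this twice, once per metadata vector, so both unavailability and optimized-out-ness follow the copied contents.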
/* Copy LENGTH bytes of SRC value's (all) contents
(value_contents_all) starting at SRC_OFFSET, into DST value's (all)
contents, starting at DST_OFFSET. If unavailable contents are
@ -1123,6 +1247,9 @@ value_contents_copy_raw (struct value *dst, int dst_offset,
replaced. Make sure to remember to implement replacing if it
turns out actually necessary. */
gdb_assert (value_bytes_available (dst, dst_offset, length));
gdb_assert (!value_bits_any_optimized_out (dst,
TARGET_CHAR_BIT * dst_offset,
TARGET_CHAR_BIT * length));
/* Copy the data. */
memcpy (value_contents_all_raw (dst) + dst_offset,
@ -1133,18 +1260,14 @@ value_contents_copy_raw (struct value *dst, int dst_offset,
src_bit_offset = src_offset * TARGET_CHAR_BIT;
dst_bit_offset = dst_offset * TARGET_CHAR_BIT;
bit_length = length * TARGET_CHAR_BIT;
for (i = 0; VEC_iterate (range_s, src->unavailable, i, r); i++)
{
ULONGEST h, l;
l = max (r->offset, src_bit_offset);
h = min (r->offset + r->length, src_bit_offset + bit_length);
ranges_copy_adjusted (&dst->unavailable, dst_bit_offset,
src->unavailable, src_bit_offset,
bit_length);
if (l < h)
mark_value_bits_unavailable (dst,
dst_bit_offset + (l - src_bit_offset),
h - l);
}
ranges_copy_adjusted (&dst->optimized_out, dst_bit_offset,
src->optimized_out, src_bit_offset,
bit_length);
}
/* Copy LENGTH bytes of SRC value's (all) contents
@ -1152,8 +1275,7 @@ value_contents_copy_raw (struct value *dst, int dst_offset,
(all) contents, starting at DST_OFFSET. If unavailable contents
are being copied from SRC, the corresponding DST contents are
marked unavailable accordingly. DST must not be lazy. If SRC is
lazy, it will be fetched now. If SRC is not valid (is optimized
out), an error is thrown.
lazy, it will be fetched now.
It is assumed the contents of DST in the [DST_OFFSET,
DST_OFFSET+LENGTH) range are wholly available. */
@ -1162,8 +1284,6 @@ void
value_contents_copy (struct value *dst, int dst_offset,
struct value *src, int src_offset, int length)
{
require_not_optimized_out (src);
if (src->lazy)
value_fetch_lazy (src);
@ -1216,45 +1336,29 @@ value_optimized_out (struct value *value)
{
/* We can only know if a value is optimized out once we have tried to
fetch it. */
if (!value->optimized_out && value->lazy)
if (VEC_empty (range_s, value->optimized_out) && value->lazy)
value_fetch_lazy (value);
return value->optimized_out;
return !VEC_empty (range_s, value->optimized_out);
}
int
value_optimized_out_const (const struct value *value)
{
return value->optimized_out;
}
/* Mark contents of VALUE as optimized out, starting at OFFSET bytes, and
the following LENGTH bytes. */
void
set_value_optimized_out (struct value *value, int val)
mark_value_bytes_optimized_out (struct value *value, int offset, int length)
{
value->optimized_out = val;
mark_value_bits_optimized_out (value,
offset * TARGET_CHAR_BIT,
length * TARGET_CHAR_BIT);
}
int
value_entirely_optimized_out (const struct value *value)
{
if (!value->optimized_out)
return 0;
if (value->lval != lval_computed
|| !value->location.computed.funcs->check_any_valid)
return 1;
return !value->location.computed.funcs->check_any_valid (value);
}
/* See value.h. */
int
value_bits_valid (const struct value *value, int offset, int length)
void
mark_value_bits_optimized_out (struct value *value, int offset, int length)
{
if (!value->optimized_out)
return 1;
if (value->lval != lval_computed
|| !value->location.computed.funcs->check_validity)
return 0;
return value->location.computed.funcs->check_validity (value, offset,
length);
insert_into_bit_range_vector (&value->optimized_out, offset, length);
}
int
@ -1567,7 +1671,6 @@ value_copy (struct value *arg)
VALUE_FRAME_ID (val) = VALUE_FRAME_ID (arg);
VALUE_REGNUM (val) = VALUE_REGNUM (arg);
val->lazy = arg->lazy;
val->optimized_out = arg->optimized_out;
val->embedded_offset = value_embedded_offset (arg);
val->pointed_to_offset = arg->pointed_to_offset;
val->modifiable = arg->modifiable;
@ -1578,6 +1681,7 @@ value_copy (struct value *arg)
}
val->unavailable = VEC_copy (range_s, arg->unavailable);
val->optimized_out = VEC_copy (range_s, arg->optimized_out);
set_value_parent (val, arg->parent);
if (VALUE_LVAL (val) == lval_computed)
{
@ -2852,24 +2956,19 @@ value_primitive_field (struct value *arg1, int offset,
int bitpos = TYPE_FIELD_BITPOS (arg_type, fieldno);
int container_bitsize = TYPE_LENGTH (type) * 8;
if (arg1->optimized_out)
v = allocate_optimized_out_value (type);
v = allocate_value_lazy (type);
v->bitsize = TYPE_FIELD_BITSIZE (arg_type, fieldno);
if ((bitpos % container_bitsize) + v->bitsize <= container_bitsize
&& TYPE_LENGTH (type) <= (int) sizeof (LONGEST))
v->bitpos = bitpos % container_bitsize;
else
{
v = allocate_value_lazy (type);
v->bitsize = TYPE_FIELD_BITSIZE (arg_type, fieldno);
if ((bitpos % container_bitsize) + v->bitsize <= container_bitsize
&& TYPE_LENGTH (type) <= (int) sizeof (LONGEST))
v->bitpos = bitpos % container_bitsize;
else
v->bitpos = bitpos % 8;
v->offset = (value_embedded_offset (arg1)
+ offset
+ (bitpos - v->bitpos) / 8);
set_value_parent (v, arg1);
if (!value_lazy (arg1))
value_fetch_lazy (v);
}
v->bitpos = bitpos % 8;
v->offset = (value_embedded_offset (arg1)
+ offset
+ (bitpos - v->bitpos) / 8);
set_value_parent (v, arg1);
if (!value_lazy (arg1))
value_fetch_lazy (v);
}
else if (fieldno < TYPE_N_BASECLASSES (arg_type))
{
@ -2882,37 +2981,29 @@ value_primitive_field (struct value *arg1, int offset,
if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
value_fetch_lazy (arg1);
/* The optimized_out flag is only set correctly once a lazy value is
loaded, having just loaded some lazy values we should check the
optimized out case now. */
if (arg1->optimized_out)
v = allocate_optimized_out_value (type);
/* We special case virtual inheritance here because this
requires access to the contents, which we would rather avoid
for references to ordinary fields of unavailable values. */
if (BASETYPE_VIA_VIRTUAL (arg_type, fieldno))
boffset = baseclass_offset (arg_type, fieldno,
value_contents (arg1),
value_embedded_offset (arg1),
value_address (arg1),
arg1);
else
boffset = TYPE_FIELD_BITPOS (arg_type, fieldno) / 8;
if (value_lazy (arg1))
v = allocate_value_lazy (value_enclosing_type (arg1));
else
{
/* We special case virtual inheritance here because this
requires access to the contents, which we would rather avoid
for references to ordinary fields of unavailable values. */
if (BASETYPE_VIA_VIRTUAL (arg_type, fieldno))
boffset = baseclass_offset (arg_type, fieldno,
value_contents (arg1),
value_embedded_offset (arg1),
value_address (arg1),
arg1);
else
boffset = TYPE_FIELD_BITPOS (arg_type, fieldno) / 8;
if (value_lazy (arg1))
v = allocate_value_lazy (value_enclosing_type (arg1));
else
{
v = allocate_value (value_enclosing_type (arg1));
value_contents_copy_raw (v, 0, arg1, 0,
TYPE_LENGTH (value_enclosing_type (arg1)));
}
v->type = type;
v->offset = value_offset (arg1);
v->embedded_offset = offset + value_embedded_offset (arg1) + boffset;
v = allocate_value (value_enclosing_type (arg1));
value_contents_copy_raw (v, 0, arg1, 0,
TYPE_LENGTH (value_enclosing_type (arg1)));
}
v->type = type;
v->offset = value_offset (arg1);
v->embedded_offset = offset + value_embedded_offset (arg1) + boffset;
}
else
{
@ -2923,12 +3014,7 @@ value_primitive_field (struct value *arg1, int offset,
if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
value_fetch_lazy (arg1);
/* The optimized_out flag is only set correctly once a lazy value is
loaded, having just loaded some lazy values we should check for
the optimized out case now. */
if (arg1->optimized_out)
v = allocate_optimized_out_value (type);
else if (value_lazy (arg1))
if (value_lazy (arg1))
v = allocate_value_lazy (type);
else
{
@ -3657,6 +3743,11 @@ value_fetch_lazy (struct value *val)
{
gdb_assert (value_lazy (val));
allocate_value_contents (val);
/* A value is either lazy, or fully fetched. The
availability/validity is only established as we try to fetch a
value. */
gdb_assert (VEC_empty (range_s, val->optimized_out));
gdb_assert (VEC_empty (range_s, val->unavailable));
if (value_bitsize (val))
{
/* To read a lazy bitfield, read the entire enclosing value. This
@ -3673,10 +3764,11 @@ value_fetch_lazy (struct value *val)
if (value_lazy (parent))
value_fetch_lazy (parent);
if (!value_bits_valid (parent,
TARGET_CHAR_BIT * offset + value_bitpos (val),
value_bitsize (val)))
set_value_optimized_out (val, 1);
if (value_bits_any_optimized_out (parent,
TARGET_CHAR_BIT * offset + value_bitpos (val),
value_bitsize (val)))
mark_value_bytes_optimized_out (val, value_embedded_offset (val),
TYPE_LENGTH (type));
else if (!unpack_value_bits_as_long (value_type (val),
value_contents_for_printing (parent),
offset,
@ -3751,16 +3843,12 @@ value_fetch_lazy (struct value *val)
if (value_lazy (new_val))
value_fetch_lazy (new_val);
/* If the register was not saved, mark it optimized out. */
if (value_optimized_out (new_val))
set_value_optimized_out (val, 1);
else
{
set_value_lazy (val, 0);
value_contents_copy (val, value_embedded_offset (val),
new_val, value_embedded_offset (new_val),
TYPE_LENGTH (type));
}
/* Copy the contents and the unavailability/optimized-out
meta-data from NEW_VAL to VAL. */
set_value_lazy (val, 0);
value_contents_copy (val, value_embedded_offset (val),
new_val, value_embedded_offset (new_val),
TYPE_LENGTH (type));
if (frame_debug)
{
@ -3813,11 +3901,6 @@ value_fetch_lazy (struct value *val)
else if (VALUE_LVAL (val) == lval_computed
&& value_computed_funcs (val)->read != NULL)
value_computed_funcs (val)->read (val);
/* Don't call value_optimized_out on val, doing so would result in a
recursive call back to value_fetch_lazy, instead check the
optimized_out flag directly. */
else if (val->optimized_out)
/* Keep it optimized out. */;
else
internal_error (__FILE__, __LINE__, _("Unexpected lazy value type."));


@ -33,6 +33,54 @@ struct language_defn;
struct value_print_options;
struct xmethod_worker;
/* Values can be partially 'optimized out' and/or 'unavailable'.
These are distinct states and have different string representations
and related error strings.
'unavailable' has a specific meaning in this context. It means the
value exists in the program (at the machine level), but GDB has no
means to get to it. Such a value is normally printed as
<unavailable>. Examples of how to end up with an unavailable value
would be:
- We're inspecting a traceframe, and the memory or registers the
debug information says the value lives in haven't been collected.
- We're inspecting a core dump, and the memory or registers the debug
information says the value lives in aren't present in the dump
(that is, we have a partial/trimmed core dump, or we don't fully
understand/handle the core dump's format).
- We're doing live debugging, but the debug API has no means to
get at where the value lives in the machine, like e.g., ptrace
not having access to some register or register set.
- Any other similar scenario.
OTOH, "optimized out" is about what the compiler decided to generate
(or not generate). A chunk of a value that was optimized out does
not actually exist in the program. There's no way to get at it
short of compiling the program differently.
A register that has not been saved in a frame is likewise considered
optimized out, except not-saved registers have a different string
representation and related error strings. E.g., we'll print them as
<not saved> instead of <optimized out>, as in:
(gdb) p/x $rax
$1 = <not saved>
(gdb) info registers rax
rax <not saved>
If the debug info describes a variable as being in such a register,
we'll still print the variable as <optimized out>. IOW, <not saved>
is reserved for inspecting registers at the machine level.
When comparing value contents, optimized out chunks, unavailable
chunks, and valid contents data are all considered different. See
value_contents_eq for more info.
*/
/* The structure which defines the type of a value. It should never
be possible for a program lval value to survive over a call to the
inferior (i.e. to be put into the history list or an internal
@ -181,14 +229,6 @@ struct lval_funcs
TOVAL is not considered as an lvalue. */
void (*write) (struct value *toval, struct value *fromval);
/* Check the validity of some bits in VALUE. This should return 1
if all the bits starting at OFFSET and extending for LENGTH bits
are valid, or 0 if any bit is invalid. */
int (*check_validity) (const struct value *value, int offset, int length);
/* Return 1 if any bit in VALUE is valid, 0 if they are all invalid. */
int (*check_any_valid) (const struct value *value);
/* If non-NULL, this is used to implement pointer indirection for
this value. This method may return NULL, in which case value_ind
will fall back to ordinary indirection. */
@ -327,16 +367,29 @@ extern int value_fetch_lazy (struct value *val);
exist in the program, at least partially. If the value is lazy,
this may fetch it now. */
extern int value_optimized_out (struct value *value);
extern void set_value_optimized_out (struct value *value, int val);
/* Like value_optimized_out, but don't fetch the value even if it is
lazy. Mainly useful for constructing other values using VALUE as
template. */
extern int value_optimized_out_const (const struct value *value);
/* Given a value, return true if any of the contents bits starting at
OFFSET and extending for LENGTH bits is optimized out, false
otherwise. */
/* Like value_optimized_out, but return false if any bit in the object
is valid. */
extern int value_entirely_optimized_out (const struct value *value);
extern int value_bits_any_optimized_out (const struct value *value,
int bit_offset, int bit_length);
/* Like value_optimized_out, but return true iff the whole value is
optimized out. */
extern int value_entirely_optimized_out (struct value *value);
/* Mark VALUE's content bytes starting at OFFSET and extending for
LENGTH bytes as optimized out. */
extern void mark_value_bytes_optimized_out (struct value *value,
int offset, int length);
/* Mark VALUE's content bits starting at OFFSET and extending for
LENGTH bits as optimized out. */
extern void mark_value_bits_optimized_out (struct value *value,
int offset, int length);
/* Set or return field indicating whether a variable is initialized or
not, based on debugging information supplied by the compiler.
@ -415,13 +468,6 @@ extern struct value *coerce_ref (struct value *value);
extern struct value *coerce_array (struct value *value);
/* Given a value, determine whether the bits starting at OFFSET and
extending for LENGTH bits are valid. This returns nonzero if all
bits in the given range are valid, zero if any bit is invalid. */
extern int value_bits_valid (const struct value *value,
int offset, int length);
/* Given a value, determine whether the bits starting at OFFSET and
extending for LENGTH bits are a synthetic pointer. */
@ -473,35 +519,53 @@ extern void mark_value_bits_unavailable (struct value *value,
its enclosing type chunk, you'd do:
int len = TYPE_LENGTH (check_typedef (value_enclosing_type (val)));
value_available_contents (val, 0, val, 0, len);
value_contents_eq (val, 0, val, 0, len);
Returns true iff the set of available contents match. Unavailable
contents compare equal with unavailable contents, and different
with any available byte. For example, if 'x's represent an
unavailable byte, and 'V' and 'Z' represent different available
bytes, in a value with length 16:
Returns true iff the set of available/valid contents match.
offset: 0 4 8 12 16
contents: xxxxVVVVxxxxVVZZ
Optimized-out contents are equal to optimized-out contents, and are
not equal to non-optimized-out contents.
Unavailable contents are equal to unavailable contents, and are not
equal to non-unavailable contents.
For example, if 'x's represent an unavailable byte, and 'V' and 'Z'
represent different available/valid bytes, in a value with length
16:
offset: 0 4 8 12 16
contents: xxxxVVVVxxxxVVZZ
then:
value_available_contents_eq(val, 0, val, 8, 6) => 1
value_available_contents_eq(val, 0, val, 4, 4) => 1
value_available_contents_eq(val, 0, val, 8, 8) => 0
value_available_contents_eq(val, 4, val, 12, 2) => 1
value_available_contents_eq(val, 4, val, 12, 4) => 0
value_available_contents_eq(val, 3, val, 4, 4) => 0
value_contents_eq(val, 0, val, 8, 6) => 1
value_contents_eq(val, 0, val, 4, 4) => 0
value_contents_eq(val, 0, val, 8, 8) => 0
value_contents_eq(val, 4, val, 12, 2) => 1
value_contents_eq(val, 4, val, 12, 4) => 0
value_contents_eq(val, 3, val, 4, 4) => 0
We only know whether a value chunk is available if we've tried to
read it. As this routine is used by printing routines, which may
be printing values in the value history, long after the inferior is
gone, it works with const values. Therefore, this routine must not
be called with lazy values. */
If 'x' represents an unavailable byte and 'o' represents an
optimized-out byte, in a value with length 8:
extern int value_available_contents_eq (const struct value *val1, int offset1,
const struct value *val2, int offset2,
int length);
offset: 0 4 8
contents: xxxxoooo
then:
value_contents_eq(val, 0, val, 2, 2) => 1
value_contents_eq(val, 4, val, 6, 2) => 1
value_contents_eq(val, 0, val, 4, 4) => 0
We only know whether a value chunk is unavailable or optimized out
if we've tried to read it. As this routine is used by printing
routines, which may be printing values in the value history, long
after the inferior is gone, it works with const values. Therefore,
this routine must not be called with lazy values. */
extern int value_contents_eq (const struct value *val1, int offset1,
const struct value *val2, int offset2,
int length);
/* Read LENGTH bytes of memory starting at MEMADDR into BUFFER, which
is (or will be copied to) VAL's contents buffer offset by