Working with data using a client dataset

Like any dataset, you can use client datasets to supply the data for data-aware controls using a data source component. See Chapter 20, "Using data controls" for information on how to display database information in data-aware controls.

Client datasets implement all the properties and methods inherited from TDataSet. For a complete introduction to this generic dataset behavior, see Chapter 24, "Understanding datasets."

In addition, client datasets implement many of the features common to table type datasets, such as:

  • Sorting records with indexes
  • Using indexes to search for records
  • Limiting records with ranges
  • Creating master/detail relationships
  • Controlling read/write access
  • Creating the underlying dataset
  • Emptying the dataset
  • Synchronizing client datasets

For details on these features, see "Using table type datasets" on page 24-25.

Client datasets differ from other datasets in that they hold all their data in memory. Because of this, their support for some database functions can involve additional capabilities or considerations. This chapter describes some of these common functions and the differences introduced by client datasets.

Navigating data in client datasets

If an application uses standard data-aware controls, then a user can navigate through a client dataset's records using the built-in behavior of those controls. You can also navigate programmatically through records using standard dataset methods such as First, Last, Next, and Prior. For more information about these methods, see "Navigating datasets" on page 24-5.

Unlike most datasets, client datasets can also position the cursor at a specific record in the dataset by using the RecNo property. Ordinarily an application uses RecNo to determine the record number of the current record. With a client dataset, however, you can also set RecNo to a particular record number to make that record the current one.
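For example, the following line (a minimal sketch, assuming ClientDataSet1 is an open TClientDataSet) makes the tenth record the current one:

ClientDataSet1.RecNo := 10;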

Limiting what records appear

To restrict users to a subset of available data on a temporary basis, applications can use ranges and filters. When you apply a range or a filter, the client dataset does not display all the data in its in-memory cache. Instead, it only displays the data that meets the range or filter conditions. For more information about using filters, see "Displaying and editing a subset of data using filters" on page 24-13. For more information about ranges, see "Limiting records with ranges" on page 24-31.

With most datasets, filter strings are parsed into SQL commands that are then implemented on the database server. Because of this, the SQL dialect of the server limits what operations are used in filter strings. Client datasets implement their own filter support, which includes more operations than that of other datasets. For example, when using a client dataset, filter expressions can include string operators that return substrings, operators that parse date/time values, and much more. Client datasets also allow filters on BLOB fields or complex field types such as ADT fields and array fields.
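For example, the following sketch uses a date/time function and the LIKE operator on a memo field, neither of which most other datasets can filter on (the field names SaleDate and Notes are hypothetical):

ClientDataSet1.Filter := 'Year(SaleDate) = 1998 and Notes LIKE ''%rush%''';
ClientDataSet1.Filtered := True;  { activate the filter }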

The operators and functions that client datasets can use in filters, along with a comparison to other datasets that support filters, are given below:

Table 29.1 Filter support in client datasets

Operator or function   Example                           Supported by other datasets   Comment

Comparisons
=                      State = 'CA'                      Yes
<>                     State <> 'CA'                     Yes
>=                     DateEntered >= '1/1/1998'         Yes
<=                     Total <= 100,000                  Yes
>                      Percentile > 50                   Yes
<                      Field1 < Field2                   Yes
BLANK                  State <> 'CA' or State = BLANK    Yes                           Blank records do not appear unless explicitly included in the filter.
IS NULL                Field1 IS NULL                    No
IS NOT NULL            Field1 IS NOT NULL                No

Logical operators
and                    State = 'CA' and Country = 'US'   Yes
or                     State = 'CA' or State = 'MA'      Yes
not                    not (State = 'CA')                Yes

Arithmetic operators
+                      Total + 5 > 100                   Depends on driver             Applies to numbers, strings, or date (time) + number.
-                      Field1 - 7 <> 10                  Depends on driver             Applies to numbers, dates, or date (time) - number.
*                      Discount * 100 > 20               Depends on driver             Applies to numbers only.
/                      Discount > Total / 5              Depends on driver             Applies to numbers only.

String functions
Upper                  Upper(Field1) = 'ALWAYS'          No
Lower                  Lower(Field1 + Field2) = 'josp'   No
Substring              Substring(DateFld,8) = '1998'     No                            Value goes from position of second argument to end or number of chars in third argument. First char has position 1.
                       Substring(DateFld,1,3) = 'JAN'
Trim                   Trim(Field1 + Field2)             No                            Removes third argument from front and back. If no third argument, trims spaces.
                       Trim(Field1, '-')
TrimLeft               TrimLeft(StringField)             No                            See Trim.
                       TrimLeft(Field1, '$') <> ''
TrimRight              TrimRight(StringField)            No                            See Trim.
                       TrimRight(Field1, '.') <> ''

DateTime functions
Year                   Year(DateField) = 2000            No
Month                  Month(DateField) <> 12            No
Day                    Day(DateField) = 1                No
Hour                   Hour(DateField) < 16              No
Minute                 Minute(DateField) = 0             No
Second                 Second(DateField) = 30            No
GetDate                GetDate - DateField > 7           No                            Represents current date and time.
Date                   DateField = Date(GetDate)         No                            Returns the date portion of a datetime value.
Time                   TimeField > Time(GetDate)         No                            Returns the time portion of a datetime value.

Miscellaneous
Like                   Memo LIKE '%filters%'             No                            Works like SQL-92 without the ESC clause. When applied to BLOB fields, FilterOptions determines whether case is considered.
In                     Day(DateField) in (1,7)           No                            Works like SQL-92. Second argument is a list of values all with the same type.
*                      State = 'M*'                      Yes                           Wildcard for partial comparisons.

When applying ranges or filters, the client dataset still stores all of its records in memory. The range or filter merely determines which records are available to controls that navigate or display data from the client dataset.

Note When fetching data from a provider, you can also limit the data that the client dataset stores by supplying parameters to the provider. For details, see "Limiting records with parameters" on page 29-29.

Editing data

Client datasets represent their data as an in-memory data packet. This packet is the value of the client dataset's Data property. By default, however, edits are not stored in the Data property. Instead, the insertions, deletions, and modifications (made by users or programmatically) are stored in an internal change log, represented by the Delta property. Using a change log serves two purposes:

  • The change log is required for applying updates to a database server or external provider component.
  • The change log provides sophisticated support for undoing changes.

The LogChanges property lets you disable logging. When LogChanges is True, changes are recorded in the log. When LogChanges is False, changes are made directly to the Data property. You can disable the change log in file-based applications if you do not want the undo support.
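For example, a file-based application that does not need undo support can turn the log off as soon as it opens the dataset (a minimal sketch):

ClientDataSet1.LogChanges := False;  { edits now go straight to Data }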

Edits in the change log remain there until they are removed by the application. Applications remove edits when

  • Undoing changes
  • Saving changes

Note Saving the client dataset to a file does not remove edits from the change log. When you reload the dataset, the Data and Delta properties are the same as they were when the data was saved.

Undoing changes

Even though a record's original version remains unchanged in Data, each time a user edits a record, leaves it, and returns to it, the user sees the last changed version of the record. If a user or application edits a record a number of times, each changed version of the record is stored in the change log as a separate entry.

Storing each change to a record makes it possible to support multiple levels of undo operations should it be necessary to restore a record's previous state:

  • To remove the last change to a record, call UndoLastChange. UndoLastChange takes a Boolean parameter, FollowChange, that indicates whether to reposition the cursor on the restored record (True), or to leave the cursor on the current record (False). If there are several changes to a record, each call to UndoLastChange removes another layer of edits. UndoLastChange returns a Boolean value indicating success or failure. If the removal occurs, UndoLastChange returns True. Use the ChangeCount property to check whether there are more changes to undo. ChangeCount indicates the number of changes stored in the change log.
  • Instead of removing each layer of changes to a single record, you can remove them all at once. To remove all changes to a record, select the record, and call RevertRecord. RevertRecord removes any changes to the current record from the change log.
  • To restore a deleted record, first set the StatusFilter property to [usDeleted], which makes the deleted records "visible." Next, navigate to the record you want to restore and call RevertRecord. Finally, restore the StatusFilter property to [usModified, usInserted, usUnmodified] so that the edited version of the dataset (now containing the restored record) is again visible.
  • At any point during edits, you can save the current state of the change log using the SavePoint property. Reading SavePoint returns a marker into the current position in the change log. Later, if you want to undo all changes that occurred since you read the save point, set SavePoint to the value you read previously. Your application can obtain values for multiple save points. However, once you back up the change log to a save point, the values of all save points that your application read after that one are invalid.
  • You can abandon all changes recorded in the change log by calling CancelUpdates. CancelUpdates clears the change log, effectively discarding all edits to all records. Be careful when you call CancelUpdates. After you call CancelUpdates, you cannot recover any changes that were in the log.
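The following sketch ties some of these calls together, assuming ClientDataSet1 is an open TClientDataSet with logging enabled:

var
  Marker: Integer;
...
Marker := ClientDataSet1.SavePoint;          { remember the current state of the log }
{ ... the user edits some records here ... }
if ClientDataSet1.UndoLastChange(True) then  { back out the most recent edit and go to it }
  ShowMessage('Last change undone');
ClientDataSet1.SavePoint := Marker;          { discard everything logged since the marker }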

Saving changes

Client datasets use different mechanisms for incorporating changes from the change log, depending on whether the client dataset stores its data in a file or represents data obtained through a provider. Whichever mechanism is used, the change log is automatically emptied when all updates have been incorporated.

File-based applications can simply merge the changes into the local cache represented by the Data property. They do not need to worry about resolving local edits with changes made by other users. To merge the change log into the Data property, call the MergeChangeLog method. "Merging changes into data" on page 29-34 describes this process.
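In code, this is a single call (assuming ClientDataSet1 holds file-based data):

ClientDataSet1.MergeChangeLog;  { fold the change log into Data and empty the log }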

You can't use MergeChangeLog if you are using the client dataset to cache updates or to represent the data from an external provider component. The information in the change log is required for resolving updated records with the data stored in the database (or source dataset). Instead, you call ApplyUpdates, which attempts to write the modifications to the database server or source dataset, and updates the Data property only when the modifications have been successfully committed. See "Applying updates" on page 29-20 for more information about this process.
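A minimal sketch of this second mechanism, assuming ClientDataSet1 is connected to a provider:

if ClientDataSet1.State in [dsEdit, dsInsert] then
  ClientDataSet1.Post;                      { move any pending edit into the change log }
if ClientDataSet1.ApplyUpdates(0) > 0 then  { 0 means stop at the first error }
  ShowMessage('Some updates could not be applied');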

Constraining data values

Client datasets can enforce constraints on the edits a user makes to data. These constraints are applied when the user tries to post changes to the change log. You can always supply custom constraints. These let you provide your own, application-defined limits on what values users post to a client dataset.

In addition, when client datasets represent server data that is accessed using the BDE, they also enforce data constraints imported from the database server. If the client dataset works with an external provider component, the provider can control whether those constraints are sent to the client dataset, and the client dataset can control whether it uses them. For details on how the provider controls whether constraints are included in data packets, see "Handling server constraints" on page 30-13. For details on how and why client datasets can turn off enforcement of server constraints, see "Handling constraints from the server" on page 29-30.

Specifying custom constraints

You can use the properties of the client dataset's field components to impose your own constraints on what data users can enter. Each field component has two properties that you can use to specify constraints:

  • The DefaultExpression property defines a default value that is assigned to the field if the user does not enter a value. Note that if the database server or source dataset also assigns a default expression for the field, the client dataset's version takes precedence because it is assigned before the update is applied back to the database server or source dataset.
  • The CustomConstraint property lets you assign a constraint condition that must be met before a field value can be posted. Custom constraints defined this way are applied in addition to any constraints imported from the server. For more information about working with custom constraints on field components, see "Creating a custom constraint" on page 25-22.

In addition, you can create record-level constraints using the client dataset's Constraints property. Constraints is a collection of TCheckConstraint objects, where each object represents a separate condition. Use the CustomConstraint property of a TCheckConstraint object to add your own constraints that are checked when you post records.
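A minimal sketch of both levels (the field names Salary, Amount, and CreditLimit are hypothetical):

{ Field-level: any name (here x) stands for the field's own value }
with ClientDataSet1.FieldByName('Salary') do
begin
  CustomConstraint := 'x > 0';
  ConstraintErrorMessage := 'Salary must be positive.';
end;
{ Record-level: a TCheckConstraint can compare fields within the record }
with ClientDataSet1.Constraints.Add do
begin
  CustomConstraint := 'Amount <= CreditLimit';
  ErrorMessage := 'Amount exceeds the credit limit.';
end;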

Sorting and indexing

Using indexes provides several benefits to your applications:

  • They allow client datasets to locate data quickly.
  • They let you apply ranges to limit the available records.
  • They let your application set up relationships with other datasets such as lookup tables or master/detail forms.
  • They specify the order in which records appear.

If a client dataset represents server data or uses an external provider, it inherits a default index and sort order based on the data it receives. The default index is called DEFAULT_ORDER. You can use this ordering, but you cannot change or delete the index.

In addition to the default index, the client dataset maintains a second index, called CHANGEINDEX, on the changed records stored in the change log (Delta property). CHANGEINDEX orders all records in the client dataset as they would appear if the changes specified in Delta were applied. CHANGEINDEX is based on the ordering inherited from DEFAULT_ORDER. As with DEFAULT_ORDER, you cannot change or delete the CHANGEINDEX index.

You can use other existing indexes, and you can create your own indexes. The following sections describe how to create and use indexes with client datasets.

Note You may also want to review the material on indexes in table type datasets, which also applies to client datasets. This material is in "Sorting records with indexes" on page 24-26 and "Limiting records with ranges" on page 24-31.

Adding a new index

There are three ways to add indexes to a client dataset:

• To create a temporary index at runtime that sorts the records in the client dataset, you can use the IndexFieldNames property. Specify field names separated by semicolons; their order in the list determines their order in the index.

This is the least powerful method of adding indexes. You can't specify a descending or case-insensitive index, and the resulting indexes do not support grouping. These indexes do not persist when you close the dataset, and are not saved when you save the client dataset to a file.
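Setting the property is a single assignment (a sketch with hypothetical field names):

ClientDataSet1.IndexFieldNames := 'Country;City';  { sort by Country, then by City }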

• To create an index at runtime that can be used for grouping, call AddIndex. AddIndex lets you specify the properties of the index, including
    • The name of the index. This can be used for switching indexes at runtime.
    • The fields that make up the index. The index uses these fields to sort records and to locate records that have specific values on these fields.
    • How the index sorts records. By default, indexes impose an ascending sort order (based on the machine's locale). This default sort order is case-sensitive. You can set options to make the entire index case-insensitive or to sort in descending order. Alternately, you can provide a list of fields to be sorted case-insensitively and a list of fields to be sorted in descending order.
    • The default level of grouping support for the index.

Indexes created with AddIndex do not persist when the client dataset is closed. (That is, they are lost when you close and reopen the client dataset.) You can't call AddIndex when the dataset is closed. Indexes you add using AddIndex are not saved when you save the client dataset to a file.
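A minimal sketch that creates and activates a grouping index (the index and field names are hypothetical):

{ name, fields, options, descending fields, case-insensitive fields, grouping level }
ClientDataSet1.AddIndex('ByRepCust', 'SalesRep;Customer', [], '', '', 2);
ClientDataSet1.IndexName := 'ByRepCust';  { make the new index active }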

• The third way to create an index is at the time the client dataset is created. Before creating the client dataset, specify the desired indexes using the IndexDefs property. The indexes are then created along with the underlying dataset when you call CreateDataSet. See "Creating and deleting tables" on page 24-38 for more information about creating client datasets.

As with AddIndex, indexes you create with the dataset support grouping, can sort in ascending order on some fields and descending order on others, and can be case insensitive on some fields and case sensitive on others. Indexes created this way always persist and are saved when you save the client dataset to a file.
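A sketch of this third approach, assuming the dataset's FieldDefs have already been filled in:

with ClientDataSet1.IndexDefs.AddIndexDef do
begin
  Name := 'ByRepCust';
  Fields := 'SalesRep;Customer';
  Options := [ixCaseInsensitive];
end;
ClientDataSet1.CreateDataSet;  { creates the data along with the index }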

Tip You can index and sort on internally calculated fields with client datasets.

Deleting and switching indexes

To remove an index you created for a client dataset, call DeleteIndex and specify the name of the index to remove. You cannot remove the DEFAULT_ORDER and CHANGEINDEX indexes.

To use a different index when more than one index is available, use the IndexName property to select the index to use. At design time, you can select from available indexes in the IndexName property drop-down box in the Object Inspector.
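For example (the index names are hypothetical):

ClientDataSet1.IndexName := 'ByRepCust';  { switch to another index }
ClientDataSet1.DeleteIndex('TempSort');   { remove an index that is no longer needed }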

Using indexes to group data

When you use an index in your client dataset, it automatically imposes a sort order on the records. Because of this order, adjacent records usually contain duplicate values on the fields that make up the index. For example, consider the following fragment from an orders table that is indexed on the SalesRep and Customer fields:

SalesRep  Customer  OrderNo  Amount
1         1         5        100
1         1         2        50
1         2         3        200
1         2         6        300
2         1         15       50
2         3         4        200

Because of the sort order, adjacent values in the SalesRep column are duplicated. Within the records for SalesRep 1, adjacent values in the Customer column are duplicated. That is, the data is grouped by SalesRep, and within the SalesRep group it is grouped by Customer. Each grouping has an associated level. In this case, the SalesRep group has level 1 (because it is not nested in any other groups) and the Customer group has level 2 (because it is nested in the group with level 1). Grouping level corresponds to the order of fields in the index.

Client datasets let you determine where the current record lies within any given grouping level. This allows your application to display records differently, depending on whether they are the first record in the group, in the middle of a group, or the last record in a group. For example, you might want to display a field value only if it is on the first record of the group, eliminating the duplicate values. To do this with the previous table results in the following:

SalesRep  Customer  OrderNo  Amount
1         1         5        100
                    2        50
          2         3        200
                    6        300
2         1         15       50
          3         4        200

To determine where the current record falls within any group, use the GetGroupState method. GetGroupState takes an integer giving the level of the group and returns a value indicating where the current record falls within the group (first record, last record, or neither).

When you create an index, you can specify the level of grouping it supports (up to the number of fields in the index). GetGroupState can't provide information about groups beyond that level, even if the index sorts records on additional fields.
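For example, an OnGetText handler on a persistent field component can suppress duplicate values, as in the table above (a minimal sketch, assuming the active index groups on SalesRep at level 1):

procedure TForm1.ClientDataSet1SalesRepGetText(Sender: TField;
  var Text: string; DisplayText: Boolean);
begin
  if gbFirst in ClientDataSet1.GetGroupState(1) then
    Text := Sender.AsString  { first record of the group: show the value }
  else
    Text := '';              { otherwise suppress the duplicate }
end;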

Representing calculated values

As with any dataset, you can add calculated fields to your client dataset. These are fields whose values you calculate dynamically, usually based on the values of other fields in the same record. For more information about using calculated fields, see "Defining a calculated field" on page 25-7.

Client datasets, however, let you optimize when fields are calculated by using internally calculated fields. For more information on internally calculated fields, see "Using internally calculated fields in client datasets" below.

You can also tell client datasets to create calculated values that summarize the data in several records using maintained aggregates. For more information on maintained aggregates, see "Using maintained aggregates" on page 29-11.

Using internally calculated fields in client datasets

In other datasets, your application must compute the value of calculated fields every time the record changes or the user edits any fields in the current record. It does this in an OnCalcFields event handler.

While you can still do this, client datasets let you minimize the number of times calculated fields must be recomputed by saving calculated values in the client dataset's data. When calculated values are saved with the client dataset, they must still be recomputed when the user edits the current record, but your application need not recompute values every time the current record changes. To save calculated values in the client dataset's data, use internally calculated fields instead of calculated fields.

Internally calculated fields, just like calculated fields, are calculated in an OnCalcFields event handler. However, you can optimize your event handler by checking the State property of your client dataset. When State is dsInternalCalc, you must recompute internally calculated fields. When State is dsCalcFields, you need only recompute regular calculated fields.
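A minimal sketch of such a handler (the field names Total, Qty, and Price are hypothetical; Total is assumed to be an InternalCalc field):

procedure TForm1.ClientDataSet1CalcFields(DataSet: TDataSet);
begin
  if DataSet.State = dsInternalCalc then
    DataSet.FieldByName('Total').AsFloat :=
      DataSet.FieldByName('Qty').AsInteger * DataSet.FieldByName('Price').AsFloat;
  { when State is dsCalcFields, recompute regular calculated fields instead }
end;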

To use internally calculated fields, you must define the fields as internally calculated before you create the client dataset. Depending on whether you use persistent fields or field definitions, you do this in one of the following ways:

  • If you use persistent fields, define fields as internally calculated by selecting InternalCalc in the Fields editor.
  • If you use field definitions, set the InternalCalcField property of the relevant field definition to True.

Note Other types of datasets use internally calculated fields. However, with other datasets, you do not calculate these values in an OnCalcFields event handler. Instead, they are computed automatically by the BDE or remote database server.

Using maintained aggregates

Client datasets provide support for summarizing data over groups of records. Because these summaries are automatically updated as you edit the data in the dataset, this summarized data is called a "maintained aggregate."

In their simplest form, maintained aggregates let you obtain information such as the sum of all values in a column of the client dataset. They are flexible enough, however, to support a variety of summary calculations and to provide subtotals over groups of records defined by a field in an index that supports grouping.

Specifying aggregates

To specify that you want to calculate summaries over the records in a client dataset, use the Aggregates property. Aggregates is a collection of aggregate specifications (TAggregate). You can add aggregate specifications to your client dataset using the Collection Editor at design time, or using the Add method of Aggregates at runtime. If you want to create field components for the aggregates, create persistent fields for the aggregated values in the Fields Editor.

Note When you create aggregated fields, the appropriate aggregate objects are added to the client dataset's Aggregates property automatically. Do not add them explicitly when creating aggregated persistent fields. For details on creating aggregated persistent fields, see "Defining an aggregate field" on page 25-10.
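A runtime sketch that adds an aggregate in code rather than through persistent fields (the aggregate and field names are hypothetical):

with ClientDataSet1.Aggregates.Add do
begin
  Expression := 'Sum(Amount)';
  AggregateName := 'TotalAmount';
  Active := True;
end;
ClientDataSet1.AggregatesActive := True;  { start maintaining aggregate values }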

For each aggregate, the Expression property indicates the summary calculation it represents. Expression can contain a simple summary expression such as

Sum(Field1)

or a complex expression that combines information from several fields, such as

Sum(Qty * Price) - Sum(AmountPaid)

Aggregate expressions include one or more of the summary operators in Table 29.2.

Table 29.2 Summary operators for maintained aggregates

Operator  Use
Sum       Totals the values for a numeric field or expression
Avg       Computes the average value for a numeric or date-time field or expression
Count     Specifies the number of non-blank values for a field or expression
Min       Indicates the minimum value for a string, numeric, or date-time field or expression
Max       Indicates the maximum value for a string, numeric, or date-time field or expression

The summary operators act on field values or on expressions built from field values using the same operators you use to create filters. (You can't nest summary operators, however.) You can create expressions by using operators on summarized values with other summarized values, or on summarized values and constants. However, you can't combine summarized values with field values, because such expressions are ambiguous (there is no indication of which record should supply the field value). These rules are illustrated in the following expressions:

Sum(Qty * Price)           {legal -- summary of an expression on fields }
Max(Field1) - Max(Field2)  {legal -- expression on summaries }
Avg(DiscountRate) * 100    {legal -- expression of summary and constant }
Min(Sum(Field1))           {illegal -- nested summaries }
Count(Field1) - Field2     {illegal -- expression of summary and field }

Aggregating over groups of records

By default, maintained aggregates are calculated so that they summarize all the records in the client dataset. However, you can specify that you want to summarize over the records in a group instead. This lets you provide intermediate summaries such as subtotals for groups of records that share a common field value.

Before you can specify a maintained aggregate over a group of records, you must use an index that supports the appropriate grouping. See "Using indexes to group data" on page 29-9 for information on grouping support.

Once you have an index that groups the data in the way you want it summarized, specify the IndexName and GroupingLevel properties of the aggregate to indicate what index it uses, and which group or subgroup on that index defines the records it summarizes.

For example, consider the following fragment from an orders table that is grouped by SalesRep and, within SalesRep, by Customer:

SalesRep  Customer  OrderNo  Amount
1         1         5        100
1         1         2        50
1         2         3        200
1         2         6        300
2         1         15       50
2         3         4        200

The following code sets up a maintained aggregate that indicates the total amount for each sales representative:

Agg.Expression := 'Sum(Amount)';
Agg.IndexName := 'SalesCust';
Agg.GroupingLevel := 1;
Agg.AggregateName := 'Total for Rep';

To add an aggregate that summarizes for each customer within a given sales representative, create a maintained aggregate with level 2.

Maintained aggregates that summarize over a group of records are associated with a specific index. The Aggregates property can include aggregates that use different indexes. However, only the aggregates that summarize over the entire dataset and those that use the current index are valid. Changing the current index changes which aggregates are valid. To determine which aggregates are valid at any time, use the ActiveAggs property.

Obtaining aggregate values

To get the value of a maintained aggregate, call the Value method of the TAggregate object that represents the aggregate. Value returns the maintained aggregate for the group that contains the current record of the client dataset.

When you are summarizing over the entire client dataset, you can call Value at any time to obtain the maintained aggregate. However, when you are summarizing over grouped information, you must be careful to ensure that the current record is in the group whose summary you want. Because of this, it is a good idea to obtain aggregate values at clearly specified times, such as when you move to the first record of a group or when you move to the last record of a group. Use the GetGroupState method to determine where the current record falls within a group.
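For example, the following sketch reads the summary for the group containing the current record, assuming the 'Total for Rep' aggregate created earlier is the first item in the Aggregates collection:

with ClientDataSet1 do
  if gbLast in GetGroupState(1) then  { on the last record of the group }
    ShowMessage(VarToStr(TAggregate(Aggregates.Items[0]).Value));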

To display maintained aggregates in data-aware controls, use the Fields editor to create a persistent aggregate field component. When you specify an aggregate field in the Fields editor, the client dataset's Aggregates is automatically updated to include the appropriate aggregate specification. The AggFields property contains the new aggregated field component, and the FindField method returns it.

Copying data from another dataset

To copy the data from another dataset at design time, right-click the client dataset and choose Assign Local Data. A dialog appears listing all the datasets available in your project. Select the one whose data and structure you want to copy and choose OK. When you copy the source dataset, your client dataset is automatically activated.

To copy from another dataset at runtime, you can assign its data directly or, if the source is another client dataset, you can clone the cursor.

Assigning data directly

You can use the client dataset's Data property to assign data to a client dataset from another dataset. Data is a data packet in the form of an OleVariant. A data packet can come from another client dataset or from any other dataset by using a provider. Once a data packet is assigned to Data, its contents are displayed automatically in data-aware controls connected to the client dataset by a data source component.

When you open a client dataset that represents server data or that uses an external provider component, data packets are automatically assigned to Data.

When your client dataset does not use a provider, you can copy the data from another client dataset as follows:

ClientDataSet1.Data := ClientDataSet2.Data;

Note When you copy the Data property of another client dataset, you copy the change log as well, but the copy does not reflect any filters or ranges that have been applied. To include filters or ranges, you must clone the source dataset's cursor instead.

If you are copying from a dataset other than a client dataset, you can create a dataset provider component, link it to the source dataset, and then copy its data:

TempProvider := TDataSetProvider.Create(Form1);
TempProvider.DataSet := SourceDataSet;
ClientDataSet1.Data := TempProvider.Data;
TempProvider.Free;

Note When you assign directly to the Data property, the new data packet is not merged into the existing data. Instead, all previous data is replaced.

If you want to merge changes from another dataset, rather than copying its data, you must use a provider component. Create a dataset provider as in the previous example, but attach it to the destination dataset and, instead of copying the Data property, use the ApplyUpdates method:

TempProvider := TDataSetProvider.Create(Form1);
TempProvider.DataSet := ClientDataSet1;
TempProvider.ApplyUpdates(SourceDataSet.Delta, -1, ErrCount);
TempProvider.Free;

Cloning a client dataset cursor

Client datasets use the CloneCursor method to let you work with a second view of the data at runtime. CloneCursor lets a second client dataset share the original client dataset's data. This is less expensive than copying all the original data, but, because the data is shared, the second client dataset can't modify the data without affecting the original client dataset.

CloneCursor takes three parameters: Source specifies the client dataset to clone. The last two parameters (Reset and KeepSettings) indicate whether to copy information other than the data. This information includes any filters, the current index, links to a master table (when the source dataset is a detail set), the ReadOnly property, and any links to a connection component or provider.

When Reset and KeepSettings are False, a cloned client dataset is opened, and the settings of the source client dataset are used to set the properties of the destination. When Reset is True, the destination dataset's properties are given the default values (no index or filters, no master table, ReadOnly is False, and no connection component or provider is specified). When KeepSettings is True, the destination dataset's properties are not changed.
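For example (a minimal sketch with two client datasets):

{ ClientDataSet2 opens sharing ClientDataSet1's data and, because both
  Reset and KeepSettings are False, its settings as well }
ClientDataSet2.CloneCursor(ClientDataSet1, False, False);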

Adding application-specific information to the data

Application developers can add custom information to the client dataset's Data property. Because this information is bundled with the data packet, it is included when you save the data to a file or stream. It is copied when you copy the data to another dataset. Optionally, it can be included with the Delta property so that a provider can read this information when it receives updates from the client dataset.

To save application-specific information with the Data property, use the SetOptionalParam method. This method lets you store an OleVariant that contains the data under a specific name.

To retrieve this application-specific information, use the GetOptionalParam method, passing in the name that was used when the information was stored.
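A minimal sketch (the parameter name and value are hypothetical):

var
  UserName: OleVariant;
...
{ store the value in the data packet; True also includes it in Delta }
ClientDataSet1.SetOptionalParam('LastSyncUser', 'jsmith', True);
{ later, read it back by name }
UserName := ClientDataSet1.GetOptionalParam('LastSyncUser');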
