`TrackerBatch` executes a series of SPARQL updates and RDF data
insertions within a transaction.
A batch is created with [method@SparqlConnection.create_batch].
To add resources use [method@Batch.add_resource],
[method@Batch.add_sparql] or [method@Batch.add_statement].
When a batch is ready for execution, use [method@Batch.execute]
or [method@Batch.execute_async]. The batch is executed as a single
transaction; it will succeed or fail entirely.
This object has a single use; after the batch is executed, it can
only be finished and freed.
The mapping of blank node labels is global in a `TrackerBatch`;
referencing the same blank node label in different operations of
a batch will resolve to the same resource.
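A minimal sketch of typical usage might look as follows (assuming an
existing `connection` of type [class@SparqlConnection]; error handling
abbreviated):
```c
g_autoptr (TrackerBatch) batch = NULL;
g_autoptr (GError) error = NULL;

batch = tracker_sparql_connection_create_batch (connection);

// Both operations reference the same _:b blank node label, so they
// resolve to the same resource within the batch.
tracker_batch_add_sparql (batch,
                          "INSERT DATA { _:b a rdfs:Resource }");
tracker_batch_add_sparql (batch,
                          "INSERT DATA { _:b rdfs:comment 'Created in a batch' }");

if (!tracker_batch_execute (batch, NULL, &error))
  g_warning ("Batch failed: %s", error->message);
```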
Inserts the RDF data contained in @stream as part of @batch.
The RDF data will be inserted in the given @default_graph if one is provided,
or the anonymous graph if @default_graph is %NULL. Any RDF data that has a
graph specified (e.g. using the `GRAPH` clause in the Trig format) will
be inserted in the specified graph instead of @default_graph.
The @flags argument is reserved for future expansions, currently
%TRACKER_DESERIALIZE_FLAGS_NONE must be passed.
A `TrackerBatch`
Deserialization flags
RDF format of data in stream
Default graph that will receive the RDF data
Input stream with RDF data
Adds the RDF represented by @resource to @batch.
A `TrackerBatch`
RDF graph to insert the resource to
A [class@Resource]
Adds a SPARQL update string to @batch.
A `TrackerBatch`
A SPARQL update string
Adds a [class@SparqlStatement] containing a SPARQL update. The statement will
be executed once in the batch, with the parameters bound as specified in the
variable arguments.
The variable arguments are a %NULL-terminated list of variable name, type [type@GObject.Type],
and value. The value C type must correspond to the given [type@GObject.Type]. For example, for
a statement that has a single `~name` parameter, it could be given a value for execution
with the following code:
```c
tracker_batch_add_statement (batch, stmt,
                             "name", G_TYPE_STRING, "John Smith",
                             NULL);
```
A [class@SparqlStatement] may be used on multiple [method@Batch.add_statement]
calls with the same or different values, on the same or different `TrackerBatch`
objects.
This function should only be called on [class@SparqlStatement] objects
obtained through [method@SparqlConnection.update_statement] or
update statements loaded through [method@SparqlConnection.load_statement_from_gresource].
a `TrackerBatch`
a [class@SparqlStatement] containing a SPARQL update
NULL-terminated list of parameters bound to @stmt, in triplets of name, type and value.
Adds a [class@SparqlStatement] containing a SPARQL update. The statement will
be executed once in the batch, with the values bound as specified by @variable_names
and @values.
For example, for a statement that has a single `~name` parameter,
it could be given a value for execution with the following code:
```c
const char *names = { "name" };
const GValue values[G_N_ELEMENTS (names)] = { 0, };
g_value_init (&values[0], G_TYPE_STRING);
g_value_set_string (&values[0], "John Smith");
tracker_batch_add_statementv (batch, stmt,
G_N_ELEMENTS (names),
names, values);
```
```python
batch.add_statement(stmt, ['name'], ['John Smith'])
```
```js
batch.add_statement(stmt, ['name'], ['John Smith']);
```
A [class@SparqlStatement] may be used on multiple [method@Batch.add_statement]
calls with the same or different values, on the same or different `TrackerBatch`
objects.
This function should only be called on [class@SparqlStatement] objects
obtained through [method@SparqlConnection.update_statement] or
update statements loaded through [method@SparqlConnection.load_statement_from_gresource].
A `TrackerBatch`
A [class@SparqlStatement] containing a SPARQL update
The number of bound parameters
The names of each bound parameter
The values of each bound parameter
Executes the batch. This operation happens synchronously.
%TRUE if there were no errors, %FALSE otherwise
a `TrackerBatch`
Optional [type@Gio.Cancellable]
Executes the batch. This operation happens asynchronously, when
finished @callback will be executed.
A `TrackerBatch`
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Finishes the operation started with [method@Batch.execute_async].
%TRUE if there were no errors, %FALSE otherwise
A `TrackerBatch`
A [type@Gio.AsyncResult] with the result of the operation
Returns the [class@SparqlConnection] that this batch was created
from.
The SPARQL connection of this batch.
A `TrackerBatch`
The [class@SparqlConnection] the batch belongs to.
This macro essentially does the same thing as
tracker_check_version() but as a pre-processor operation rather
than a run-time operation. It will evaluate to true or false based on the
version passed in and the version available.
An example of how to make sure you have the version of Tracker
installed to run your code:
```c
if (!TRACKER_CHECK_VERSION (0, 10, 7)) {
  g_error ("Tracker version 0.10.7 or above is needed");
}
```
the required major version.
the required minor version.
the required micro version.
Flags affecting deserialization from an RDF data format.
No flags.
`TrackerEndpoint` is a helper object to make RDF triple stores represented
by a [class@SparqlConnection] publicly available to other processes/hosts.
This is a base abstract object, see [class@EndpointDBus] to make
RDF triple stores available to other processes in the same machine, and
[class@EndpointHttp] to make it available to other hosts in the
network.
When the RDF triple store represented by a [class@SparqlConnection]
is made public this way, other peers may connect to the database using
[ctor@SparqlConnection.bus_new] or [ctor@SparqlConnection.remote_new]
to access this endpoint exclusively, or they may use the `SERVICE <uri> { ... }` SPARQL
syntax from their own [class@SparqlConnection]s to expand their data set.
By default, and as long as the underlying [class@SparqlConnection]
allows SPARQL updates and RDF graph changes, endpoints will allow updates
and modifications to happen through them. Use [method@Endpoint.set_readonly]
to change this behavior.
By default, queries performed on an endpoint may access every RDF graph
in the triple store, as well as further external SPARQL endpoints. Use
[method@Endpoint.set_allowed_graphs] and
[method@Endpoint.set_allowed_services] to change this behavior. Users do
not typically need to do this for D-Bus endpoints, as these already have a layer
of protection with the Tracker portal. This is the mechanism used by the portal
itself. This access control API may not interoperate with SPARQL endpoint
implementations other than Tracker.
Returns the list of RDF graphs that the endpoint allows
access to.
The list of allowed RDF graphs
The endpoint
Returns the list of external SPARQL endpoints that are
allowed to be accessed through this endpoint.
The list of allowed services
The endpoint
Returns whether the endpoint is readonly, thus SPARQL update
queries are disallowed.
%TRUE if the endpoint is readonly
The endpoint
Returns the [class@SparqlConnection] that this endpoint proxies
to a wider audience.
The proxied SPARQL connection
a `TrackerEndpoint`
Sets the list of RDF graphs that this endpoint will allow access
to. Any explicit (e.g. through the `GRAPH` keyword) or implicit (e.g. through the
default anonymous graph) access to RDF graphs not specified in this
list will be treated in SPARQL queries as if those graphs did not exist, or
(equivalently) were empty. Changes to these graphs through SPARQL
updates will also be disallowed.
If @graphs is %NULL, access will be allowed to every RDF graph stored
in the endpoint, this is the default behavior. If you want to forbid access
to all RDF graphs, use an empty list.
The empty string (`""`) is allowed as a special value, to allow access
to the stock anonymous graph. All graph names are otherwise dependent
on the endpoint and its contained data.
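For example, a minimal sketch restricting an endpoint to the anonymous
graph plus one named graph (the graph name is hypothetical, and the array
is assumed to be %NULL-terminated):
```c
const gchar *graphs[] = { "", "http://example.org/graph", NULL };

tracker_endpoint_set_allowed_graphs (endpoint, graphs);
```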
The endpoint
List of allowed graphs, or %NULL to allow all graphs
Sets the list of external SPARQL endpoints that this endpoint
will allow access to. Access through the `SERVICE` SPARQL syntax
will fail for services not specified in this list.
If @services is %NULL, access will be allowed to every external endpoint,
this is the default behavior. If you want to forbid access to all
external SPARQL endpoints, use an empty list.
This affects both remote SPARQL endpoints accessed through HTTP,
and external SPARQL endpoints offered through D-Bus. For the latter,
the following syntax is allowed to describe them as a URI:
`DBUS_URI = 'dbus:' [ ('system' | 'session') ':' ]? dbus-name [ ':' object-path ]?`
If the system/session part is omitted, it will default to the session
bus. If the object path is omitted, the `/org/freedesktop/Tracker3/Endpoint`
[class@EndpointDBus] default will be assumed.
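For instance, the following sketch (with hypothetical service names, and
assuming a %NULL-terminated array) allows one HTTP endpoint and two D-Bus
endpoints:
```c
const gchar *services[] = {
  "http://example.org/sparql",                          // Remote HTTP endpoint
  "dbus:org.example.Endpoint",                          // Session bus, default object path
  "dbus:system:org.example.Sys:/org/example/Endpoint",  // System bus, explicit object path
  NULL
};

tracker_endpoint_set_allowed_services (endpoint, services);
```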
The endpoint
List of allowed services, or %NULL to allow all services
Sets whether the endpoint will be readonly. Readonly endpoints
will not allow SPARQL update queries. The underlying
[class@SparqlConnection] may be readonly on its own; this
method does not change its behavior in any way.
The endpoint
Whether the endpoint will be readonly
RDF graphs that are allowed to be accessed
through queries to this endpoint. See
tracker_endpoint_set_allowed_graphs().
External SPARQL endpoints that are allowed to be
accessed through queries to this endpoint. See
tracker_endpoint_set_allowed_services().
Whether the endpoint allows SPARQL updates or not. See
tracker_endpoint_set_readonly().
The [class@SparqlConnection] being proxied by this endpoint.
`TrackerEndpointDBus` makes the RDF data in a [class@SparqlConnection]
accessible to other processes via DBus.
This object is a [class@Endpoint] subclass that exports
a [class@SparqlConnection] so its RDF data is accessible to other
processes through the given [class@Gio.DBusConnection].
```c
// This process already has org.example.Endpoint bus name
endpoint = tracker_endpoint_dbus_new (sparql_connection,
                                      dbus_connection,
                                      NULL,
                                      NULL,
                                      &error);
// From another process
connection = tracker_sparql_connection_bus_new ("org.example.Endpoint",
                                                NULL,
                                                dbus_connection,
                                                &error);
```
The `TrackerEndpointDBus` will manage a DBus object at the given path
with the `org.freedesktop.Tracker3.Endpoint` interface. If no path is
given, the object will be at the default `/org/freedesktop/Tracker3/Endpoint`
location.
Access to D-Bus endpoints may be managed via the
[signal@EndpointDBus::block-call] signal, the boolean
return value expressing whether the request is blocked or not.
Inspection of the requester address is left up to the user. The
default value allows all requests independently of their provenance.
However, moderating access to D-Bus interfaces is typically not necessary
in user code, as access to public D-Bus endpoints will be transparently
managed through the Tracker portal service for applications sandboxed
via XDG portals. These already have access to D-Bus SPARQL endpoints and
their data naturally filtered as defined in the application manifest.
A `TrackerEndpointDBus` may be created on a different thread/main
context from the one that created the [class@SparqlConnection].
Registers a Tracker endpoint object at @object_path on @dbus_connection.
The default object path is `/org/freedesktop/Tracker3/Endpoint`.
a `TrackerEndpointDBus` object.
The [class@SparqlConnection] being made public
#GDBusConnection to expose the DBus object over
The object path to use, or %NULL to use the default
Optional [type@Gio.Cancellable]
The [class@Gio.DBusConnection] where the connection is proxied through.
The DBus object path that this endpoint manages.
`TrackerEndpointHttp` makes the RDF data in a [class@SparqlConnection]
accessible to other hosts via HTTP.
This object is a [class@Endpoint] subclass that exports
a [class@SparqlConnection] so its RDF data is accessible via HTTP
requests on the given port. This endpoint implementation is compliant
with the [SPARQL protocol specifications](https://www.w3.org/TR/2013/REC-sparql11-protocol-20130321/)
and may interoperate with other implementations.
```c
// This host has "example.local" hostname
endpoint = tracker_endpoint_http_new (sparql_connection,
                                      8080,
                                      tls_certificate,
                                      NULL,
                                      &error);
// From another host
connection = tracker_sparql_connection_remote_new ("http://example.local:8080/sparql");
```
Access to HTTP endpoints may be managed via the
[signal@EndpointHttp::block-remote-address] signal, the boolean
return value expressing whether the connection is blocked or not.
Inspection of the requester address is left up to the user. The
default value allows all requests regardless of their provenance;
users are encouraged to add a handler.
If the provided [class@Gio.TlsCertificate] is %NULL, the endpoint will allow
plain HTTP connections. Users are encouraged to provide a certificate
in order to use HTTPS.
As a security measure, and in compliance with the specifications,
the HTTP endpoint does not handle database updates or modifications in any
way. The database content is considered to be entirely managed by the
process that creates the HTTP endpoint and owns the [class@SparqlConnection].
A `TrackerEndpointHttp` may be created on a different thread/main
context from the one that created the [class@SparqlConnection].
Sets up a Tracker endpoint to listen via HTTP on the given @port.
If @certificate is not %NULL, HTTPS may be used to connect to the
endpoint.
a `TrackerEndpointHttp` object.
The [class@SparqlConnection] being made public
HTTP port to handle incoming requests
Optional [type@Gio.TlsCertificate] to use for encryption
Optional [type@Gio.Cancellable]
[class@Gio.TlsCertificate] to encrypt the communication.
HTTP port used to listen for requests.
Allows control over the connections established. The given
address is that of the requesting peer.
Returning %FALSE in this handler allows the connection;
returning %TRUE blocks it. The default with no signal
handlers connected is %FALSE.
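A minimal sketch of a handler that only accepts loopback peers (the
callback name is hypothetical, and the handler assumes the address may be
a [class@Gio.InetSocketAddress]):
```c
static gboolean
block_remote_address_cb (TrackerEndpointHttp *endpoint,
                         GSocketAddress      *address,
                         gpointer             user_data)
{
  GInetAddress *inet_address;

  if (!G_IS_INET_SOCKET_ADDRESS (address))
    return TRUE; // Block anything unexpected

  inet_address = g_inet_socket_address_get_address (G_INET_SOCKET_ADDRESS (address));

  // Returning FALSE allows the connection, TRUE blocks it
  return !g_inet_address_get_is_loopback (inet_address);
}

g_signal_connect (endpoint, "block-remote-address",
                  G_CALLBACK (block_remote_address_cb), NULL);
```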
The socket address of the remote connection
The major version of the Tracker library.
Like #tracker_major_version, but intended to be used at application compile time.
The micro version of the Tracker library.
Like #tracker_micro_version, but intended to be used at application compile time.
The minor version of the Tracker library.
Like #tracker_minor_version, but intended to be used at application compile time.
`TrackerNamespaceManager` object represents a mapping between namespaces and
their shortened prefixes.
This object keeps track of namespaces, and allows you to assign
short prefixes for them to avoid frequent use of full namespace IRIs. The syntax
used is that of [Compact URIs (CURIEs)](https://www.w3.org/TR/2010/NOTE-curie-20101216).
Usually you will want to use a namespace manager obtained through
[method@SparqlConnection.get_namespace_manager] from the
[class@SparqlConnection] that manages the RDF data, as that will
contain all prefixes and namespaces that are pre-defined by its ontology.
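A brief sketch of expanding and compressing CURIEs with the namespace
manager of an existing `connection` (the results shown assume the stock
`rdf:` prefix is defined):
```c
TrackerNamespaceManager *namespaces;
g_autofree gchar *expanded = NULL;
g_autofree gchar *compressed = NULL;

namespaces = tracker_sparql_connection_get_namespace_manager (connection);

expanded = tracker_namespace_manager_expand_uri (namespaces, "rdf:type");
// expanded is "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

compressed = tracker_namespace_manager_compress_uri (namespaces, expanded);
// compressed is "rdf:type"
```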
Creates a new, empty `TrackerNamespaceManager` instance.
a new `TrackerNamespaceManager` instance
Returns the global `TrackerNamespaceManager` that contains a set of well-known
namespaces and prefixes, such as `rdf:`, `rdfs:`, `nie:`, `tracker:`, etc.
Note that the list of prefixes and namespaces is hardcoded in
libtracker-sparql. It may not correspond with the installed set of
ontologies, if they have been modified since they were installed.
Use [method@SparqlConnection.get_namespace_manager] instead.
a global, shared `TrackerNamespaceManager` instance
Adds @prefix as the recognised abbreviation of @namespace.
Only one prefix is allowed for a given namespace, and all prefixes must
be unique.
Since 3.3, the `TrackerNamespaceManager` instances obtained through
[method@SparqlConnection.get_namespace_manager] are "sealed";
this API call should not be performed on those.
A `TrackerNamespaceManager`
a short, unique prefix to identify @namespace
the URL of the given namespace
If @uri begins with one of the namespaces known to this
`TrackerNamespaceManager`, then the return value will be the
compressed URI. Otherwise, %NULL will be returned.
(nullable): the compressed URI
a `TrackerNamespaceManager`
a URI or compact URI
If @compact_uri begins with one of the prefixes known to this
`TrackerNamespaceManager`, then the return value will be the
expanded URI. Otherwise, a copy of @compact_uri will be returned.
The possibly expanded URI in a newly-allocated string.
a `TrackerNamespaceManager`
a URI or compact URI
Calls @func for each known prefix / URI pair.
a `TrackerNamespaceManager`
the function to call for each prefix / URI pair
user data to pass to the function
Returns whether @prefix is known.
%TRUE if the `TrackerNamespaceManager` knows about @prefix, %FALSE otherwise
a `TrackerNamespaceManager`
a string
Looks up the namespace URI corresponding to @prefix, or %NULL if the prefix
is not known.
a string owned by the `TrackerNamespaceManager`, or %NULL
a `TrackerNamespaceManager`
a string
Writes out all namespaces as `@prefix` statements in
the [Turtle](https://www.w3.org/TR/turtle/) RDF format.
a newly-allocated string
a `TrackerNamespaceManager`
`TrackerNotifier` allows receiving notification on changes
in the data stored by a [class@SparqlConnection].
This object may be created through [method@SparqlConnection.create_notifier],
events can then be listened for by connecting to the
[signal@Notifier::events] signal.
Not every change is notified; only RDF resources with a
class that has the [nrl:notify](nrl-ontology.html#nrl:notify)
property defined by the ontology will be notified upon changes.
Database changes are communicated through [struct@NotifierEvent] events on
individual graph/resource pairs. The event type obtained through
[method@NotifierEvent.get_event_type] will determine the type of event.
Insertion of new resources is notified through
%TRACKER_NOTIFIER_EVENT_CREATE events, deletion of
resources is notified through %TRACKER_NOTIFIER_EVENT_DELETE
events, and changes to any property of the resource are notified
through %TRACKER_NOTIFIER_EVENT_UPDATE events.
The events happen in reaction to database changes; after a `TrackerNotifier`
receives an event of type %TRACKER_NOTIFIER_EVENT_DELETE, the resource will
no longer exist and only the information in the [struct@NotifierEvent]
will remain.
Similarly, when receiving an event of type %TRACKER_NOTIFIER_EVENT_UPDATE,
the resource will have already changed, so the data previous to the update is
no longer available.
The [signal@Notifier::events] signal is emitted in the thread-default
main context of the thread where the `TrackerNotifier` instance was created.
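A minimal sketch of listening for changes (assuming an existing
`connection`; the callback name is hypothetical):
```c
static void
on_notifier_events (TrackerNotifier *notifier,
                    const gchar     *service,
                    const gchar     *graph,
                    GPtrArray       *events,
                    gpointer         user_data)
{
  guint i;

  for (i = 0; i < events->len; i++) {
    TrackerNotifierEvent *event = g_ptr_array_index (events, i);

    g_print ("Event type %d on %s\n",
             tracker_notifier_event_get_event_type (event),
             tracker_notifier_event_get_urn (event));
  }
}

// ...
notifier = tracker_sparql_connection_create_notifier (connection);
g_signal_connect (notifier, "events",
                  G_CALLBACK (on_notifier_events), NULL);
```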
Listens to notification events from a remote DBus SPARQL endpoint.
If @connection refers to a message bus (system/session), @service must refer
to a D-Bus name (either unique or well-known). If @connection is a non-message
bus (e.g. a peer-to-peer D-Bus connection) the @service argument may be %NULL.
If the @object_path argument is %NULL, the default
`/org/freedesktop/Tracker3/Endpoint` path will be
used. If @graph is %NULL, all graphs will be listened for.
The signal subscription can be removed with
[method@Notifier.signal_unsubscribe].
Note that this call is not necessary to receive notifications on
a connection obtained through [ctor@SparqlConnection.bus_new],
only to listen to update notifications from additional DBus endpoints.
An ID for this subscription
A `TrackerNotifier`
A [class@Gio.DBusConnection]
DBus service name to subscribe to events for, or %NULL
DBus object path to subscribe to events for, or %NULL
Graph to listen events for, or %NULL
Undoes a signal subscription done through [method@Notifier.signal_subscribe].
The @handler_id argument was previously obtained during signal subscription creation.
A `TrackerNotifier`
A signal subscription handler ID
SPARQL connection to listen to.
Notifies of changes in the Tracker database.
The SPARQL service that originated the events, %NULL for the local store
The graph where the events happened on, %NULL for the default anonymous graph
A [type@GLib.PtrArray] of [struct@NotifierEvent]
The `TrackerNotifierEvent` struct represents a
change event in the stored data.
Returns the event type.
The event type
A `TrackerNotifierEvent`
Returns the tracker:id of the element being notified upon. This is a #gint64
which is used as an efficient internal identifier for the resource.
the resource ID
A `TrackerNotifierEvent`
Returns the Uniform Resource Name of the element. This is Tracker's
public identifier for the resource.
This URN is a unique string identifier for the resource being
notified upon, typically of the form `urn:uuid:...`.
The element URN
A `TrackerNotifierEvent`
Notifier event types.
An element was created.
An element was deleted.
An element was updated.
The Prefix of the DC (Dublin Core) namespace
The Prefix of the MFO namespace
The Prefix of the NAO namespace
The Prefix of the NCO namespace
The Prefix of the NFO namespace
The Prefix of the NIE namespace
The Prefix of the NMM namespace
The Prefix of the NRL namespace
The Prefix of the Osinfo namespace
The Prefix of the RDF namespace
The Prefix of the RDFS namespace
The Prefix of the SLO namespace
The Prefix of the Tracker namespace
The Prefix of the XSD namespace
Describes an RDF format to be used in data exchange.
Turtle format
([http://www.w3.org/ns/formats/Turtle](http://www.w3.org/ns/formats/Turtle))
Trig format
([http://www.w3.org/ns/formats/Trig](http://www.w3.org/ns/formats/Trig))
JSON-LD format
([http://www.w3.org/ns/formats/JSON-LD](http://www.w3.org/ns/formats/JSON-LD)).
This value was added in version 3.5.
The total number of RDF formats
`TrackerResource` is an in-memory representation of RDF data about a given resource.
This object keeps track of a set of properties for a given resource, and can
also link to other `TrackerResource` objects to form trees or graphs of RDF
data. See [method@Resource.set_relation] and [method@Resource.set_uri]
on how to link a `TrackerResource` to other RDF data.
`TrackerResource` may also hold data about literal values, added through
the specialized [method@Resource.set_int64], [method@Resource.set_string],
etc family of functions, or the generic [method@Resource.set_gvalue] method.
Since RDF properties may be multi-valued, for every `set` call there exists
another `add` call (e.g. [method@Resource.add_int64], [method@Resource.add_string]
and so on). The `set` methods also reset any previous value the
property might hold for the given resource.
Resources may have an IRI set at creation through [ctor@Resource.new],
or set afterwards through [method@Resource.set_identifier]. Resources
without a name will represent a blank node, and will be dealt with as such
during database insertions.
`TrackerResource` performs no validation that the data is coherent with
any ontology. Errors will only be found when the `TrackerResource` is
used, e.g. in database updates.
Once the RDF data is built in memory, the (tree of) `TrackerResource` may be
converted to an RDF format through [method@Resource.print_rdf], or
directly inserted into a database through [method@Batch.add_resource]
or [method@SparqlConnection.update_resource].
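A brief sketch building RDF data about a hypothetical file and printing
it as Turtle (assuming `namespaces` was obtained through
[method@SparqlConnection.get_namespace_manager]; the property names belong
to the Nepomuk ontologies):
```c
g_autoptr (TrackerResource) resource = NULL;
g_autofree gchar *turtle = NULL;

resource = tracker_resource_new ("file:///tmp/example.txt");
tracker_resource_set_uri (resource, "rdf:type", "nfo:FileDataObject");
tracker_resource_set_string (resource, "nfo:fileName", "example.txt");
tracker_resource_set_int64 (resource, "nfo:fileSize", 1024);

turtle = tracker_resource_print_rdf (resource, namespaces,
                                     TRACKER_RDF_FORMAT_TURTLE, NULL);
```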
Creates a TrackerResource instance.
a newly created `TrackerResource`.
A string containing a URI, or %NULL.
Deserializes a `TrackerResource` previously serialized with
[method@Resource.serialize]. It is implied that both ends
use a common [class@NamespaceManager].
A TrackerResource, or %NULL if
deserialization fails.
a [type@GLib.Variant]
Adds a boolean property. Previous values for the same property are kept.
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This method corresponds to [xsd:boolean](xsd-ontology.html#xsd:boolean).
The `TrackerResource`
A string identifying the property to modify
The property boolean value
Adds a date property as a [type@GLib.DateTime]. Previous values for the
same property are kept.
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This method corresponds to [xsd:date](xsd-ontology.html#xsd:date) and
[xsd:dateTime](xsd-ontology.html#xsd:dateTime).
the `TrackerResource`
a string identifying the property to modify
the property object
Adds a numeric property with double precision. Previous values for the same property are kept.
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This method corresponds to [xsd:double](xsd-ontology.html#xsd:double).
the `TrackerResource`
a string identifying the property to modify
the property object
Add @value to the list of values for the given property.
You can pass any kind of [struct@GObject.Value] for @value, but serialization functions will
normally only be able to serialize URIs/relationships and fundamental value
types (string, int, etc.).
the `TrackerResource`
a string identifying the property to set
an initialised [struct@GObject.Value]
Adds a numeric property with integer precision. Previous values for the same property are kept.
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This method corresponds to [xsd:integer](xsd-ontology.html#xsd:integer).
the `TrackerResource`
a string identifying the property to modify
the property object
Adds a numeric property with 64-bit integer precision. Previous values for the same property are kept.
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This method corresponds to [xsd:integer](xsd-ontology.html#xsd:integer).
the `TrackerResource`
a string identifying the property to modify
the property object
Adds a resource property as a `TrackerResource`. Previous values for the same property are kept.
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This method applies to properties with a [rdfs:range](rdf-ontology.html#rdfs:range)
that points to a non-literal class (i.e. a subclass of
[rdfs:Resource](rdf-ontology.html#rdfs:Resource)).
This method produces similar RDF to [method@Resource.add_uri],
although in this function the URI will depend on the identifier
set on @resource.
the `TrackerResource`
a string identifying the property to modify
the property object
Adds a string property. Previous values for the same property are kept.
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This method corresponds to [xsd:string](xsd-ontology.html#xsd:string).
the `TrackerResource`
a string identifying the property to modify
the property object
Adds a resource property as a `TrackerResource`. Previous values for the same property are kept.
Takes ownership of the given @resource.
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This method applies to properties with a [rdfs:range](rdf-ontology.html#rdfs:range)
that points to a non-literal class (i.e. a subclass of
[rdfs:Resource](rdf-ontology.html#rdfs:Resource)).
This function produces similar RDF to [method@Resource.add_uri],
although in this function the URI will depend on the identifier
set on @resource. This function takes ownership of @resource.
the `TrackerResource`
a string identifying the property to modify
the property object
Adds a resource property as a URI string. Previous values for the same property are kept.
This method applies to properties with a [rdfs:range](rdf-ontology.html#rdfs:range)
that points to a non-literal class (i.e. a subclass of
[rdfs:Resource](rdf-ontology.html#rdfs:Resource)).
This method is meant for RDF properties allowing multiple values, see
[nrl:maxCardinality](nrl-ontology.html#nrl:maxCardinality).
This function produces similar RDF to [method@Resource.add_relation], although
it requires that the URI is previously known.
the `TrackerResource`
a string identifying the property to modify
the property object
Returns the first boolean object previously assigned to a property.
the first boolean object
A `TrackerResource`
a string identifying the property to look up
Returns the first [type@GLib.DateTime] previously assigned to a property.
the first GDateTime object
A `TrackerResource`
a string identifying the property to look up
Returns the first double object previously assigned to a property.
the first double object
A `TrackerResource`
a string identifying the property to look up
Returns the first integer object previously assigned to a property.
the first integer object
A `TrackerResource`
a string identifying the property to look up
Returns the first integer object previously assigned to a property.
the first integer object
A `TrackerResource`
a string identifying the property to look up
Returns the first resource object previously assigned to a property.
the first resource object
A `TrackerResource`
a string identifying the property to look up
Returns the first string object previously assigned to a property.
the first string object
A `TrackerResource`
a string identifying the property to look up
Returns the first resource object previously assigned to a property.
the first resource object as a URI.
A `TrackerResource`
a string identifying the property to look up
Returns the identifier of a resource.
If the identifier was set to %NULL, the identifier returned will be a locally
unique SPARQL blank node identifier, such as `_:123`.
a string owned by the resource
A `TrackerResource`
Gets the list of properties defined in @resource
The list of properties.
a `TrackerResource`
Returns whether the prior values for this property would be deleted
in the SPARQL issued by @resource.
%TRUE if the property would be overwritten
a `TrackerResource`
a string identifying the property to query
Returns the list of all known values of the given property.
a [struct@GLib.List] of
[struct@GObject.Value] instances. The list should be freed with [func@GLib.List.free]
the `TrackerResource`
a string identifying the property to look up
A helper function that compares a `TrackerResource` by its identifier
string.
an integer less than, equal to, or greater than zero, if the
resource identifier is <, == or > than @identifier
a `TrackerResource`
a string identifying the resource
Serialize all the information in @resource as a JSON-LD document.
See <http://www.jsonld.org/> for more information on the JSON-LD
serialization format.
The @namespaces object is used to expand any compact URI values. In most
cases you should pass the one returned by [method@SparqlConnection.get_namespace_manager]
from the connection that is the intended recipient of this data.
Use [method@Resource.print_rdf] instead.
a newly-allocated string containing JSON-LD data.
a `TrackerResource`
a set of prefixed URLs, or %NULL to use the
Nepomuk set
Serialize all the information in @resource into the selected RDF format.
The @namespaces object is used to expand any compact URI values. In most
cases you should pass the one returned by [method@SparqlConnection.get_namespace_manager]
from the connection that is the intended recipient of this data.
a newly-allocated string containing RDF data in the requested format.
a `TrackerResource`
a set of prefixed URLs
RDF format of the printed string
target graph of the resource RDF, or %NULL for the
default graph
Generates a SPARQL command to update a database with the information
stored in @resource.
The @namespaces object is used to expand any compact URI values. In most
cases you should pass the one returned by [method@SparqlConnection.get_namespace_manager]
from the connection that is the intended recipient of this data.
a newly-allocated string containing a SPARQL update command.
a `TrackerResource`
a set of prefixed URLs, or %NULL to use the
Nepomuk set
the URN of the graph the data should be added to,
or %NULL
Serialize all the information in @resource as a Turtle document.
The generated Turtle should correspond to this standard:
<https://www.w3.org/TR/2014/REC-turtle-20140225/>
The @namespaces object is used to expand any compact URI values. In most
cases you should pass the one returned by [method@SparqlConnection.get_namespace_manager]
from the connection that is the intended recipient of this data.
Use [method@Resource.print_rdf] instead.
a newly-allocated string
a `TrackerResource`
a set of prefixed URLs, or %NULL to use the
Nepomuk set
Serializes a `TrackerResource` to a [type@GLib.Variant] in a lossless way.
All child resources are subsequently serialized. It is implied
that both ends use a common [class@NamespaceManager].
A variant describing the resource;
the reference is floating.
A `TrackerResource`
Sets a boolean property. Replaces any previous value.
This method corresponds to [xsd:boolean](xsd-ontology.html#xsd:boolean).
The `TrackerResource`
A string identifying the property to modify
The property boolean value
Sets a date property as a [type@GLib.DateTime]. Replaces any previous value.
This method corresponds to [xsd:date](xsd-ontology.html#xsd:date) and
[xsd:dateTime](xsd-ontology.html#xsd:dateTime).
the `TrackerResource`
a string identifying the property to modify
the property object
Sets a numeric property with double precision. Replaces any previous value.
This method corresponds to [xsd:double](xsd-ontology.html#xsd:double).
The `TrackerResource`
A string identifying the property to modify
The property object
Replace any previously existing value for @property_uri with @value.
When serialising to SPARQL, any properties that were set with this function
will get a corresponding DELETE statement to remove any existing values in
the database.
You can pass any kind of [struct@GObject.Value] for @value, but serialization functions will
normally only be able to serialize URIs/relationships and fundamental value
types (string, int, etc.).
the `TrackerResource`
a string identifying the property to set
an initialised [struct@GObject.Value]
Changes the identifier of a `TrackerResource`. The identifier should be a
URI or compact URI, but this is not necessarily enforced. Invalid
identifiers may cause errors when serializing the resource or trying to
insert the results in a database.
If the identifier is set to %NULL, a SPARQL blank node identifier such as
`_:123` is assigned to the resource.
A `TrackerResource`
a string identifying the resource
Sets a numeric property with integer precision. Replaces any previous value.
This method corresponds to [xsd:integer](xsd-ontology.html#xsd:integer).
The `TrackerResource`
A string identifying the property to modify
The property object
Sets a numeric property with 64-bit integer precision. Replaces any previous value.
This method corresponds to [xsd:integer](xsd-ontology.html#xsd:integer).
the `TrackerResource`
a string identifying the property to modify
the property object
Sets a resource property as a `TrackerResource`. Replaces any previous value.
This method applies to properties with a [rdfs:range](rdf-ontology.html#rdfs:range)
that points to a non-literal class (i.e. a subclass of
[rdfs:Resource](rdf-ontology.html#rdfs:Resource)).
This function produces similar RDF to [method@Resource.set_uri],
although in this function the URI will depend on the identifier
set on @resource.
the `TrackerResource`
a string identifying the property to modify
the property object
Sets a string property. Replaces any previous value.
This method corresponds to [xsd:string](xsd-ontology.html#xsd:string).
the `TrackerResource`
a string identifying the property to modify
the property object
Sets a resource property as a `TrackerResource`. Replaces any previous value.
Takes ownership of the given @resource.
This method applies to properties with a [rdfs:range](rdf-ontology.html#rdfs:range)
that points to a non-literal class (i.e. a subclass of
[rdfs:Resource](rdf-ontology.html#rdfs:Resource)).
This function produces similar RDF to [method@Resource.set_uri],
although in this function the URI will depend on the identifier
set on @resource.
the `TrackerResource`
a string identifying the property to modify
the property object
Sets a resource property as a URI string. Replaces any previous value.
This method applies to properties with a [rdfs:range](rdf-ontology.html#rdfs:range)
that points to a non-literal class (i.e. a subclass of
[rdfs:Resource](rdf-ontology.html#rdfs:Resource)).
This function produces similar RDF to [method@Resource.set_relation], although
it requires that the URI is previously known.
the `TrackerResource`
a string identifying the property to modify
the property object
The URI identifier for this resource, or %NULL for a
blank node.
Flags affecting serialization into an RDF data format.
No flags.
`TrackerSparqlConnection` holds a connection to a RDF triple store.
This triple store may be of three types:
- Local to the process, created through [ctor@SparqlConnection.new].
- An HTTP SPARQL endpoint over the network, created through
[ctor@SparqlConnection.remote_new]
- A DBus SPARQL endpoint owned by another process in the same machine, created
through [ctor@SparqlConnection.bus_new]
When creating a local triple store, it is required to give details about its
structure. This is done by passing the location of an ontology; see more
on how [ontologies are defined](ontologies.html). A local database may be
stored in a filesystem location, or it may reside in memory.
A `TrackerSparqlConnection` is private to the calling process; it can be
exposed to other hosts/processes via a [class@Endpoint], see
[ctor@EndpointDBus.new] and [ctor@EndpointHttp.new].
When issuing SPARQL queries and updates, it is recommended that these are
created through [class@SparqlStatement] to avoid the SPARQL
injection class of bugs, see [method@SparqlConnection.query_statement]
and [method@SparqlConnection.update_statement]. For SPARQL updates
it is also possible to use a "builder" approach to generate RDF data, see
[class@Resource]. It is also possible to create [class@SparqlStatement]
objects for SPARQL queries and updates from SPARQL strings embedded in a
[struct@Gio.Resource], see [method@SparqlConnection.load_statement_from_gresource].
To get the best performance, it is recommended that SPARQL updates are clustered
through [class@Batch].
`TrackerSparqlConnection` also offers a number of methods for the simple cases,
[method@SparqlConnection.query] may be used when there is a SPARQL
query string directly available, and the [method@SparqlConnection.update]
family of functions may be used for one-off updates. All functions have asynchronous
variants.
When a SPARQL query is executed, a [class@SparqlCursor] will be obtained
to iterate over the query results.
Depending on the ontology definition, `TrackerSparqlConnection` may emit
notifications whenever resources of certain types get inserted, modified or
deleted from the triple store (see [nrl:notify](nrl-ontology.html#nrl:notify)).
These notifications can be handled via a [class@Notifier] obtained with
[method@SparqlConnection.create_notifier].
When done with a connection, it is recommended to call [method@SparqlConnection.close]
or [method@SparqlConnection.close_async] explicitly to cleanly close the
connection and prevent consistency checks on future runs. The triple store
connection will be implicitly closed when the `TrackerSparqlConnection` object
is disposed.
A `TrackerSparqlConnection` may be used from multiple threads, asynchronous
updates are executed sequentially on arrival order, asynchronous
queries are dispatched in a thread pool.
If you ever have the need to procedurally compose SPARQL query strings, consider
the use of [func@sparql_escape_string] for literal strings and
the [func@sparql_escape_uri] family of functions for URIs.
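A minimal sketch creating an in-memory triple store with the stock
Nepomuk ontologies, querying it, and closing it (error handling
abbreviated):
```c
g_autoptr (GFile) ontology = NULL;
g_autoptr (TrackerSparqlConnection) connection = NULL;
g_autoptr (TrackerSparqlCursor) cursor = NULL;
g_autoptr (GError) error = NULL;

ontology = tracker_sparql_get_ontology_nepomuk ();
connection = tracker_sparql_connection_new (TRACKER_SPARQL_CONNECTION_FLAGS_NONE,
                                            NULL /* in-memory database */,
                                            ontology,
                                            NULL, &error);
if (!connection)
  g_error ("Could not create connection: %s", error->message);

cursor = tracker_sparql_connection_query (connection,
                                          "SELECT ?class { ?class a rdfs:Class }",
                                          NULL, &error);
while (tracker_sparql_cursor_next (cursor, NULL, &error))
  g_print ("%s\n", tracker_sparql_cursor_get_string (cursor, 0, NULL));

tracker_sparql_cursor_close (cursor);
tracker_sparql_connection_close (connection);
```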
@service_name (nullable): The name of the D-Bus service to connect to, or %NULL if not using a message bus.
Connects to a database owned by another process on the
local machine via DBus.
When using a message bus (session/system), the @service_name argument will
be used to describe the remote endpoint, either by unique or well-known D-Bus
names. If not using a message bus (e.g. peer-to-peer D-Bus connections) the
@service_name may be %NULL.
The D-Bus object path of the remote endpoint will be given through
@object_path, %NULL may be used to use the default
`/org/freedesktop/Tracker3/Endpoint` path.
The D-Bus connection used to set up the connection may be given through
the @dbus_connection argument. Using %NULL will resort to the default session
bus.
a new `TrackerSparqlConnection`.
The path to the object, or %NULL to use the default.
The [type@Gio.DBusConnection] to use, or %NULL to use the session bus
Finishes the operation started with [func@SparqlConnection.bus_new_async].
a new `TrackerSparqlConnection`.
A [type@Gio.AsyncResult] with the result of the operation
Creates or opens a process-local database.
This method should only be used for databases owned by the current process.
To connect to databases managed by other processes, use
[ctor@SparqlConnection.bus_new].
If @store is %NULL, the database will be created in memory.
If defined, the @ontology argument must point to a location containing
suitable `.ontology` files in Turtle format. These define the structure of
the triple store. You can learn more about [ontologies](ontologies.html),
or you can use the stock Nepomuk ontologies by calling
[func@sparql_get_ontology_nepomuk].
If opening an existing database, it is possible to pass %NULL as the
@ontology location; the ontology will be introspected from the database.
Passing a %NULL @ontology will raise an error if the database does not exist.
If a database is opened without the [flags@SparqlConnectionFlags.READONLY]
flag enabled, and the given @ontology holds differences with the current
data layout, migration to the new structure will be attempted. This operation
may raise an error. In particular, not all migrations are possible without
causing data loss and Tracker will refuse to delete data during a migration.
The database is always left in a consistent state, either prior to or
after the migration.
Operations on a [class@SparqlConnection] resulting on a
[error@SparqlError.CORRUPT] error will have the event recorded
persistently through a `.meta.corrupted` file alongside the database files.
If the database is opened without the [flags@SparqlConnectionFlags.READONLY]
flag and the file is found, this constructor will attempt to repair the
database. In that situation, this constructor will either return a valid
[class@SparqlConnection] if the database was repaired successfully, or
raise a [error@SparqlError.CORRUPT] error if the database remains
corrupted.
It is considered a developer error to ship ontologies that contain format
errors, or that fail at migrations.
It is encouraged to use `resource:///` URI locations for @ontology wherever
possible, so the triple store structure is tied to the executable binary,
and in order to minimize disk seeks during `TrackerSparqlConnection`
initialization.
a new `TrackerSparqlConnection`.
Connection flags to define the SPARQL connection behavior
The directory that contains the database as a [iface@Gio.File], or %NULL
The directory that contains the database schemas as a [iface@Gio.File], or %NULL
Optional [type@Gio.Cancellable]
Finishes the operation started with [func@SparqlConnection.new_async].
A [type@Gio.AsyncResult] with the result of the operation
Creates a connection to a remote HTTP SPARQL endpoint.
The connection is made using the libsoup HTTP library. The connection will
normally use the `https://` or `http://` protocols.
a new remote `TrackerSparqlConnection`.
Base URI of the remote connection
Connects asynchronously to a database owned by another process on the
local machine via DBus.
The name of the D-Bus service to connect to.
The path to the object, or %NULL to use the default.
The [class@Gio.DBusConnection] to use, or %NULL to use the session bus
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Creates or opens a process-local database asynchronously.
See [ctor@SparqlConnection.new] for more information.
Connection flags to define the SPARQL connection behavior
The directory that contains the database as a [iface@Gio.File], or %NULL
The directory that contains the database schemas as a [iface@Gio.File], or %NULL
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Closes a SPARQL connection.
No API calls other than g_object_unref() should happen after this call.
This call is blocking. All pending updates will be flushed, and the
store databases will be closed orderly. All ongoing SELECT queries
will be cancelled. Notifiers will no longer emit events.
A `TrackerSparqlConnection`
Closes a SPARQL connection asynchronously.
No API calls other than g_object_unref() should happen after this call.
A `TrackerSparqlConnection`
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Finishes the operation started with [method@SparqlConnection.close_async].
%FALSE if some error occurred, %TRUE otherwise
A `TrackerSparqlConnection`
A [type@Gio.AsyncResult] with the result of the operation
Creates a new [class@Batch] to store and execute SPARQL updates.
If the connection is readonly or cannot issue SPARQL updates, %NULL will be returned.
(nullable): A new [class@Batch]
a `TrackerSparqlConnection`
Creates a new [class@Notifier] to receive notifications about changes in @connection.
See [class@Notifier] documentation for information about how to use this
object.
Connections to HTTP endpoints will return %NULL.
A newly created notifier.
A `TrackerSparqlConnection`
Loads the RDF data contained in @stream into the given @connection.
This is an asynchronous operation; @callback will be invoked when the
data has been fully inserted into @connection.
The RDF data will be inserted in the given @default_graph if one is provided,
or the anonymous graph if @default_graph is %NULL. Any RDF data that has a
graph specified (e.g. using the `GRAPH` clause in the Trig format) will
be inserted in the specified graph instead of @default_graph.
The @flags argument is reserved for future expansions, currently
%TRACKER_DESERIALIZE_FLAGS_NONE must be passed.
A `TrackerSparqlConnection`
Deserialization flags
RDF format of data in stream
Default graph that will receive the RDF data
Input stream with RDF data
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Finishes the operation started with [method@SparqlConnection.deserialize_async].
%TRUE if all data was inserted successfully.
A `TrackerSparqlConnection`
A [type@Gio.AsyncResult] with the result of the operation
Returns a [class@NamespaceManager] that contains all
prefixes in the ontology of @connection.
a [class@NamespaceManager] with the prefixes of @connection.
A `TrackerSparqlConnection`
Prepares a [class@SparqlStatement] for the SPARQL contained in a [struct@Gio.Resource]
file at @resource_path.
SPARQL query files typically have the `.rq` extension. This will use
[method@SparqlConnection.query_statement] or [method@SparqlConnection.update_statement]
underneath, returning either a SPARQL query or update statement as appropriate.
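A brief sketch (the resource path is hypothetical):
```c
g_autoptr (TrackerSparqlStatement) stmt = NULL;
g_autoptr (GError) error = NULL;

stmt = tracker_sparql_connection_load_statement_from_gresource (connection,
                                                                "/org/example/queries/get-files.rq",
                                                                NULL, &error);
```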
A prepared statement
A `TrackerSparqlConnection`
The resource path of the file to parse.
Optional [type@Gio.Cancellable]
Maps a `TrackerSparqlConnection` onto another through a `private:@handle_name` URI.
This can be accessed via the SERVICE SPARQL syntax in
queries from @connection. E.g.:
```c
tracker_sparql_connection_map_connection (connection,
                                          "other-connection",
                                          other_connection);
```
```sparql
SELECT ?u {
  SERVICE <private:other-connection> {
    ?u a rdfs:Resource
  }
}
```
This is useful to interrelate data from multiple
`TrackerSparqlConnection` instances maintained by the same process,
without creating a public endpoint for @service_connection.
@connection may only be a `TrackerSparqlConnection` created via
[ctor@SparqlConnection.new] or [func@SparqlConnection.new_async].
A `TrackerSparqlConnection`
Handle name for @service_connection
a `TrackerSparqlConnection` to use from @connection
Executes a SPARQL query on @connection.
This method is synchronous and will block until the query
is executed. See [method@SparqlConnection.query_async]
for an asynchronous variant.
If the query is partially built from user input or other
untrusted sources, special care is required about possible
SPARQL injection. In order to avoid it entirely, it is recommended
to use [class@SparqlStatement]. The function
[func@sparql_escape_string] exists as a last resort,
but its use is not recommended.
a [class@SparqlCursor] with the results.
A `TrackerSparqlConnection`
String containing the SPARQL query
Optional [type@Gio.Cancellable]
Executes a SPARQL query on @connection asynchronously.
If the query is partially built from user input or other
untrusted sources, special care is required about possible
SPARQL injection. In order to avoid it entirely, it is recommended
to use [class@SparqlStatement]. The function
[func@sparql_escape_string] exists as a last resort,
but its use is not recommended.
A `TrackerSparqlConnection`
String containing the SPARQL query
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Finishes the operation started with [method@SparqlConnection.query_async].
a [class@SparqlCursor] with the results.
A `TrackerSparqlConnection`
A [type@Gio.AsyncResult] with the result of the operation
Prepares the given `SELECT`/`ASK`/`DESCRIBE`/`CONSTRUCT` SPARQL query as a
[class@SparqlStatement].
This prepared statement can be executed through [method@SparqlStatement.execute]
or [method@SparqlStatement.serialize_async] families of functions.
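A brief sketch preparing and executing a query with a bound `~title`
parameter (the query is hypothetical; `nie:title` belongs to the Nepomuk
ontologies):
```c
g_autoptr (TrackerSparqlStatement) stmt = NULL;
g_autoptr (TrackerSparqlCursor) cursor = NULL;
g_autoptr (GError) error = NULL;

stmt = tracker_sparql_connection_query_statement (connection,
                                                  "SELECT ?u { ?u nie:title ~title }",
                                                  NULL, &error);
tracker_sparql_statement_bind_string (stmt, "title", "Example title");
cursor = tracker_sparql_statement_execute (stmt, NULL, &error);
```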
A prepared statement
A `TrackerSparqlConnection`
The SPARQL query
Optional [type@Gio.Cancellable]
Serializes a `DESCRIBE` or `CONSTRUCT` query into the specified RDF format.
This is an asynchronous operation; @callback will be invoked when
the data is available for reading.
The SPARQL endpoint may not support the specified format; in that case,
an error will be raised.
The @flags argument is reserved for future expansions, currently
%TRACKER_SERIALIZE_FLAGS_NONE must be passed.
A `TrackerSparqlConnection`
Serialization flags
Output RDF format
SPARQL query
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Finishes the operation started with [method@SparqlConnection.serialize_async].
A [class@Gio.InputStream] to read RDF content.
A `TrackerSparqlConnection`
A [type@Gio.AsyncResult] with the result of the operation
Executes a SPARQL update on @connection.
This method is synchronous and will block until the update
is finished. See [method@SparqlConnection.update_async]
for an asynchronous variant.
It is recommended to consider the usage of [class@Batch]
to cluster database updates. Frequent isolated SPARQL updates
through this method will have a degraded performance in comparison.
If the query is partially built from user input or other
untrusted sources, special care is required about possible
SPARQL injection. In order to avoid it entirely, it is recommended
to use [class@SparqlStatement], or to build the SPARQL
input through [class@Resource]. The function
[func@sparql_escape_string] exists as a last resort,
but its use is not recommended.
A `TrackerSparqlConnection`
String containing the SPARQL update query
Optional [type@Gio.Cancellable]
Executes asynchronously an array of SPARQL updates. All updates in the
array are handled within a single transaction.
If the query is partially built from user input or other
untrusted sources, special care is required about possible
SPARQL injection. In order to avoid it entirely, it is recommended
to use [class@SparqlStatement], or to build the SPARQL
input through [class@Resource]. The function
[func@sparql_escape_string] exists as a last resort,
but its use is not recommended.
A `TrackerSparqlConnection`
An array of strings containing the SPARQL update queries
The number of strings passed as @sparql
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Finishes the operation started with [method@SparqlConnection.update_array_async].
%TRUE if there were no errors.
A `TrackerSparqlConnection`
A [type@Gio.AsyncResult] with the result of the operation
Executes asynchronously a SPARQL update.
It is recommended to consider the usage of [class@Batch]
to cluster database updates. Frequent isolated SPARQL updates
through this method will have a degraded performance in comparison.
If the query is partially built from user input or other
untrusted sources, special care is required about possible
SPARQL injection. In order to avoid it entirely, it is recommended
to use [class@SparqlStatement], or to build the SPARQL
input through [class@Resource]. The function
[func@sparql_escape_string] exists as a last resort,
but its use is not recommended.
A `TrackerSparqlConnection`
String containing the SPARQL update query
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Executes a SPARQL update and returns the names of the generated blank nodes.
This method is synchronous and will block until the update
is finished. See [method@SparqlConnection.update_blank_async]
for an asynchronous variant.
The @sparql query should be built with [class@Resource], or
its parts correctly escaped using [func@sparql_escape_string],
otherwise SPARQL injection is possible.
The format string of the `GVariant` is `aaa{ss}` (an array of an array
of dictionaries). The first array represents each INSERT that may exist in
the SPARQL string. The second array represents each new node for a given
WHERE clause. The last array holds a string pair with the blank node name
(e.g. `foo` for the blank node `_:foo`) and the URN that was generated for
it. For most updates the first two outer arrays will only contain one item.
This function expects blank nodes to have a durable name that persists.
The SPARQL and RDF specs define a much more
reduced scope for blank node labels. This function relies on a behavior that
goes against that reduced scope, and the returned values will become
meaningless if the %TRACKER_SPARQL_CONNECTION_FLAGS_ANONYMOUS_BNODES flag
is defined in the connection.
Users that want names generated for them should look at other methods
(e.g. IRIs containing UUIDv4 strings).
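A brief sketch of how such a variant may be traversed (assuming `results`
holds the returned [type@GLib.Variant]):
```c
GVariantIter iter1, iter2, iter3;
GVariant *insert, *where;
const gchar *name, *urn;

// The variant format is aaa{ss}
g_variant_iter_init (&iter1, results);
while ((insert = g_variant_iter_next_value (&iter1))) {
  g_variant_iter_init (&iter2, insert);
  while ((where = g_variant_iter_next_value (&iter2))) {
    g_variant_iter_init (&iter3, where);
    while (g_variant_iter_next (&iter3, "{&s&s}", &name, &urn))
      g_print ("Blank node '%s' -> %s\n", name, urn);
    g_variant_unref (where);
  }
  g_variant_unref (insert);
}
```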
a [type@GLib.Variant] with the generated URNs.
A `TrackerSparqlConnection`
String containing the SPARQL update query
Optional [type@Gio.Cancellable]
Executes asynchronously a SPARQL update and returns the names of the generated blank nodes.
See the [method@SparqlConnection.update_blank] documentation to
learn the differences with [method@SparqlConnection.update].
See [method@SparqlConnection.update_blank].
A `TrackerSparqlConnection`
String containing the SPARQL update query
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Finishes the operation started with [method@SparqlConnection.update_blank_async].
This method returns the URNs of the generated nodes, if any. See the
[method@SparqlConnection.update_blank] documentation for the interpretation
of the returned [type@GLib.Variant].
See [method@SparqlConnection.update_blank].
a [type@GLib.Variant] with the generated URNs.
A `TrackerSparqlConnection`
A [type@Gio.AsyncResult] with the result of the operation
Finishes the operation started with [method@SparqlConnection.update_async].
A `TrackerSparqlConnection`
A [type@Gio.AsyncResult] with the result of the operation
Inserts a resource as described by @resource on the given @graph.
This method is synchronous and will block until the update
is finished. See [method@SparqlConnection.update_resource_async]
for an asynchronous variant.
It is recommended to consider the usage of [class@Batch]
to cluster database updates. Frequent isolated SPARQL updates
through this method will have degraded performance in comparison.
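For example, a minimal sketch (the resource IRI, the properties and the `connection` variable are illustrative):

```c
g_autoptr (TrackerResource) resource = NULL;
g_autoptr (GError) error = NULL;

/* Describe the resource to insert/update */
resource = tracker_resource_new ("urn:example:song");
tracker_resource_set_uri (resource, "rdf:type", "nmm:MusicPiece");
tracker_resource_set_string (resource, "nie:title", "An example title");

/* Insert it in the default graph of an existing `connection` */
tracker_sparql_connection_update_resource (connection, NULL, resource,
                                           NULL, &error);
```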
%TRUE if there were no errors.
A `TrackerSparqlConnection`
RDF graph where the resource should be inserted/updated, or %NULL for the default graph
A [class@Resource]
Optional [type@Gio.Cancellable]
Inserts asynchronously a resource as described by @resource on the given @graph.
It is recommended to consider the usage of [class@Batch]
to cluster database updates. Frequent isolated SPARQL updates
through this method will have degraded performance in comparison.
A `TrackerSparqlConnection`
RDF graph where the resource should be inserted/updated, or %NULL for the default graph
A [class@Resource]
Optional [type@Gio.Cancellable]
User-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
User-defined data to be passed to @callback
Finishes the operation started with [method@SparqlConnection.update_resource_async].
%TRUE if there were no errors.
A `TrackerSparqlConnection`
A [type@Gio.AsyncResult] with the result of the operation
Prepares the given `INSERT`/`DELETE` SPARQL as a [class@SparqlStatement].
This prepared statement can be executed through
the [method@SparqlStatement.update] family of functions.
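For example, a minimal sketch (the SPARQL string and the `connection` variable are illustrative):

```c
g_autoptr (TrackerSparqlStatement) stmt = NULL;
g_autoptr (GError) error = NULL;

/* Prepare a parameterized update on an existing `connection` */
stmt = tracker_sparql_connection_update_statement (connection,
                                                   "DELETE WHERE { ?doc nie:title ~title }",
                                                   NULL, &error);
if (stmt)
  {
    /* Bind the ~title parameter and run the update */
    tracker_sparql_statement_bind_string (stmt, "title", "An example title");
    tracker_sparql_statement_update (stmt, NULL, &error);
  }
```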
A prepared statement
A `TrackerSparqlConnection`
The SPARQL update
Optional [type@Gio.Cancellable]
Connection flags to modify #TrackerSparqlConnection behavior.
No flags.
Connection is readonly.
Word stemming is applied to FTS search terms.
Unaccenting is applied to FTS search terms.
FTS search terms are filtered through a stop word list. This flag is deprecated since Tracker 3.6 and does nothing.
Ignore numbers in FTS search terms.
Treat blank nodes as specified in
SPARQL 1.1 syntax. Namely, they cannot be used as URIs. This flag is available since Tracker 3.3.
`TrackerSparqlCursor` provides the methods to iterate the results of a SPARQL query.
Cursors are obtained through e.g. [method@SparqlStatement.execute]
or [method@SparqlConnection.query] after the SPARQL query has been
executed.
When created, a cursor does not point to any element; [method@SparqlCursor.next]
is necessary to iterate one by one to the first (and following) results.
When the cursor has iterated across all rows in the result set, [method@SparqlCursor.next]
will return %FALSE with no error set.
On each row, it is possible to extract the result values through the
[method@SparqlCursor.get_integer], [method@SparqlCursor.get_string], etc... family
of methods. The column index of those functions starts at 0. The number of columns is
dependent on the SPARQL query issued, but may be checked at runtime through the
[method@SparqlCursor.get_n_columns] method.
After a cursor is iterated, it is recommended to call [method@SparqlCursor.close]
explicitly to free up resources for other users of the same [class@SparqlConnection];
this is especially important in garbage collected languages. These resources
will also be implicitly freed on cursor object finalization.
It is possible to use a given `TrackerSparqlCursor` from threads other than
the one it was created from. However, it must be used from just one thread
at any given time.
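A typical iteration loop could look like the following sketch (the query and the `connection` variable are illustrative):

```c
g_autoptr (TrackerSparqlCursor) cursor = NULL;
g_autoptr (GError) error = NULL;

cursor = tracker_sparql_connection_query (connection,
                                          "SELECT ?urn ?title { ?urn nie:title ?title }",
                                          NULL, &error);

/* Iterate all rows in the result set */
while (cursor && tracker_sparql_cursor_next (cursor, NULL, &error))
  {
    const char *urn = tracker_sparql_cursor_get_string (cursor, 0, NULL);
    const char *title = tracker_sparql_cursor_get_string (cursor, 1, NULL);

    g_print ("%s: %s\n", urn, title);
  }

if (cursor)
  tracker_sparql_cursor_close (cursor);
```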
Closes the cursor. The object can only be freed after this call.
a `TrackerSparqlCursor`
Retrieve a boolean for the current row in @column.
If the row/column do not have a boolean value, the result is
undefined, see [method@SparqlCursor.get_value_type].
a boolean value.
a `TrackerSparqlCursor`
column number to retrieve (first one is 0)
Returns the [class@SparqlConnection] associated with this
`TrackerSparqlCursor`.
the cursor [class@SparqlConnection]. The
returned object must not be unreferenced by the caller.
a `TrackerSparqlCursor`
Retrieves a [type@GLib.DateTime] pointer for the current row in @column.
[type@GLib.DateTime] object, or %NULL if the given column does not
contain a [xsd:date](xsd-ontology.html#xsd:date) or [xsd:dateTime](xsd-ontology.html#xsd:dateTime).
a `TrackerSparqlCursor`
Column number to retrieve (first one is 0)
Retrieve a double for the current row in @column.
If the row/column do not have an integer or double value, the result is
undefined, see [method@SparqlCursor.get_value_type].
a double value.
a `TrackerSparqlCursor`
column number to retrieve (first one is 0)
Retrieve an integer for the current row in @column.
If the row/column do not have an integer value, the result is
undefined, see [method@SparqlCursor.get_value_type].
a 64-bit integer value.
a `TrackerSparqlCursor`
column number to retrieve (first one is 0)
Retrieves a string representation of the data in the current
row in @column. If the string has language information (i.e. it is
a [`rdf:langString`](rdf-ontology.html#rdf:langString)), the language
tag will be returned in the location provided through @langtag. This
language tag will typically be in a format conforming to
[RFC 5646](https://www.rfc-editor.org/rfc/rfc5646.html).
a string which must not be freed. %NULL is returned if
the column is not in the `[0, n_columns]` range, or if the row/column
refer to a nullable optional value in the result set.
a `TrackerSparqlCursor`
column number to retrieve
language tag of the returned string, or %NULL if the
string has no language tag
length of the returned string
Retrieves the number of columns available in the result set.
This method should only be called after a successful
[method@SparqlCursor.next]; otherwise its return value
will be undefined.
The number of columns returned in the result set.
a `TrackerSparqlCursor`
Retrieves a string representation of the data in the current
row in @column.
Any type may be converted to a string. If the value is not bound
(See [method@SparqlCursor.is_bound]) this method will return %NULL.
a string which must not be freed. %NULL is returned if
the column is not in the `[0, n_columns]` range, or if the row/column
refer to a nullable optional value in the result set.
a `TrackerSparqlCursor`
column number to retrieve (first one is 0)
length of the returned string, or %NULL
Returns the data type bound to the current row and the given @column.
If the column is unbound, the value will be %TRACKER_SPARQL_VALUE_TYPE_UNBOUND.
See also [method@SparqlCursor.is_bound].
Values of type %TRACKER_SPARQL_VALUE_TYPE_RESOURCE and
%TRACKER_SPARQL_VALUE_TYPE_BLANK_NODE can be considered equivalent; the
difference is whether the resource is referenced as a named IRI or a blank
node.
All other [enum@SparqlValueType] value types refer to literal values.
an [enum@SparqlValueType] expressing the content type of
the given column for the current row.
a `TrackerSparqlCursor`
column number to retrieve (first one is 0)
Retrieves the name of the given @column.
This name is defined in the SPARQL query, either
implicitly from the names of the variables returned in
the result set, or explicitly through the `AS ?var` SPARQL
syntax.
The name of the given column.
a `TrackerSparqlCursor`
column number to retrieve (first one is 0)
Returns whether the given @column has a bound value in the current row.
This may not be the case through e.g. the `OPTIONAL { }` SPARQL syntax.
%TRUE if the column has a bound value, %FALSE otherwise.
a `TrackerSparqlCursor`
column number to retrieve (first one is 0)
Iterates the cursor to the next result.
If the cursor was not started, it will point to the first result after
this call. This operation is completely synchronous and it may block;
see [method@SparqlCursor.next_async] for an asynchronous variant.
%FALSE if there are no more results or if an error is found, otherwise %TRUE.
a `TrackerSparqlCursor`
Optional [type@Gio.Cancellable]
Iterates the cursor asynchronously to the next result.
If the cursor was not started, it will point to the first result after
this operation completes.
In the period between this call and the corresponding
[method@SparqlCursor.next_finish] call, the other cursor methods
should not be used, nor their results trusted. The cursor should only
be iterated once at a time.
a `TrackerSparqlCursor`
Optional [type@Gio.Cancellable]
user-defined [type@Gio.AsyncReadyCallback] to be called when
asynchronous operation is finished.
user-defined data to be passed to @callback
Finishes the asynchronous iteration to the next result started with
[method@SparqlCursor.next_async].
%FALSE if there are no more results or if an error is found, otherwise %TRUE.
a `TrackerSparqlCursor`
a [type@Gio.AsyncResult] with the result of the operation
Resets the iterator to point back to the first result.
This function only works on cursors
from direct [class@SparqlConnection] objects and cannot work
reliably across all cursor types. Issue a different query to
obtain a new cursor.
a `TrackerSparqlCursor`
The [class@SparqlConnection] used to retrieve the results.
Number of columns available in the result set.
Error domain for Tracker Sparql. Errors in this domain will be from the
[error@SparqlError] enumeration. See [struct@GLib.Error] for more information on error
domains.
Subject is not in the domain of a property or
trying to set multiple values for a single valued
property.
Internal error.
There was no disk space available to perform the request.
The specified ontology wasn't found.
Problem encountered while opening the database.
Error parsing the SPARQL string.
Problem while executing the query.
Type constraint failed when trying to insert data.
Unknown class.
Unknown graph.
Unknown property.
Unsupported feature or method.
The ontology doesn't contain the nrl:lastModified header.
The property is not completely defined.
A soft/hard corruption was found in the database during operation.
If this error is obtained during regular operations with an existing [class@SparqlConnection],
the corruption was newly found. This event will be persistently recorded so that the
[func@SparqlConnection.new_async] constructor (or its synchronous variant) will
perform database repair attempts. If this error is obtained during one of those constructors, the
database could not be repaired automatically and data loss is unavoidable. It is left to the discretion
of the API user to set up the appropriate fallbacks in this situation, to replace the
database and recover from the error. See [ctor@SparqlConnection.new] documentation
for more information on corruption handling.
The total number of error codes.
`TrackerSparqlStatement` represents a prepared statement for a SPARQL query.
The SPARQL query will be internally compiled into the format that is most
optimal to execute the query many times. For connections created
through [ctor@SparqlConnection.new] that will be a
SQLite compiled statement.
The SPARQL query may contain parameterized variables expressed via the
`~` prefix in the SPARQL syntax (e.g. `~var`); these may appear anywhere
in the SPARQL where a literal or variable would typically appear. These
parameterized variables may be mapped to arbitrary values prior to
execution. The `TrackerSparqlStatement` may be reused for future
queries with different values.
The argument bindings may be changed through the [method@SparqlStatement.bind_int],
[method@SparqlStatement.bind_string], etc. family of functions. Those functions
receive a @name argument corresponding to the variable name in the SPARQL query
(e.g. `"var"` for `~var`) and a value to map the variable to.
Once all arguments have a value, the query may be executed through
[method@SparqlStatement.execute_async] or [method@SparqlStatement.execute].
It is possible to use any `TrackerSparqlStatement` from threads other than
the one it was created from. However, binding values and executing the
statement must only happen from one thread at a time. It is possible to reuse
the `TrackerSparqlStatement` right after [method@SparqlStatement.execute_async]
was called; there is no need to wait for [method@SparqlStatement.execute_finish].
In some circumstances, it is possible that the query needs to be recompiled
from the SPARQL source. This will happen transparently.
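For example, a minimal sketch of preparing and executing a `SELECT` statement (the SPARQL string and the `connection` variable are illustrative):

```c
g_autoptr (TrackerSparqlStatement) stmt = NULL;
g_autoptr (TrackerSparqlCursor) cursor = NULL;
g_autoptr (GError) error = NULL;

stmt = tracker_sparql_connection_query_statement (connection,
                                                  "SELECT ?u { ?u nie:title ~title }",
                                                  NULL, &error);
if (stmt)
  {
    /* Bind the ~title parameter, then execute to obtain a cursor */
    tracker_sparql_statement_bind_string (stmt, "title", "An example title");
    cursor = tracker_sparql_statement_execute (stmt, NULL, &error);
  }
```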
Binds the boolean @value to the parameterized variable given by @name.
a `TrackerSparqlStatement`
variable name
value
Binds the [type@GLib.DateTime] @value to the parameterized variable given by @name.
a `TrackerSparqlStatement`
variable name
value
Binds the double @value to the parameterized variable given by @name.
a `TrackerSparqlStatement`
variable name
value
Binds the integer @value to the parameterized variable given by @name.
a `TrackerSparqlStatement`
variable name
value
Binds the @value to the parameterized variable given by @name, tagged
with the language defined by @langtag. The language tag should follow
[RFC 5646](https://www.rfc-editor.org/rfc/rfc5646.html). The parameter
will be represented as a [`rdf:langString`](rdf-ontology.html#rdf:langString).
a `TrackerSparqlStatement`
variable name
value
language tag
Binds the string @value to the parameterized variable given by @name.
a `TrackerSparqlStatement`
variable name
value
Clears all bindings.
a `TrackerSparqlStatement`
Executes the `SELECT` or `ASK` SPARQL query with the currently bound values.
This function also works for `DESCRIBE` and `CONSTRUCT` queries that
retrieve data from the triple store. These query forms that return
RDF data are however more useful together with [method@SparqlStatement.serialize_async].
This function should only be called on `TrackerSparqlStatement` objects
obtained through [method@SparqlConnection.query_statement] or
SELECT/CONSTRUCT/DESCRIBE statements loaded through
[method@SparqlConnection.load_statement_from_gresource].
An error will be raised if this method is called on an `INSERT` or `DELETE`
SPARQL query.
A `TrackerSparqlCursor` with the query results.
a `TrackerSparqlStatement`
Optional [type@Gio.Cancellable]
Executes asynchronously the `SELECT` or `ASK` SPARQL query with the currently bound values.
This function also works for `DESCRIBE` and `CONSTRUCT` queries that
retrieve data from the triple store. These query forms that return
RDF data are however more useful together with [method@SparqlStatement.serialize_async].
This function should only be called on `TrackerSparqlStatement` objects
obtained through [method@SparqlConnection.query_statement] or
SELECT/CONSTRUCT/DESCRIBE statements loaded through
[method@SparqlConnection.load_statement_from_gresource].
An error will be raised if this method is called on an `INSERT` or `DELETE`
SPARQL query.
a `TrackerSparqlStatement`
Optional [type@Gio.Cancellable]
user-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
user-defined data to be passed to @callback
Finishes the asynchronous operation started through
[method@SparqlStatement.execute_async].
A `TrackerSparqlCursor` with the query results.
a `TrackerSparqlStatement`
a [type@Gio.AsyncResult] with the result of the operation
Returns the [class@SparqlConnection] that this statement was created for.
The SPARQL connection of this statement.
a `TrackerSparqlStatement`
Returns the SPARQL string that this prepared statement holds.
The contained SPARQL query
a `TrackerSparqlStatement`
Serializes a `DESCRIBE` or `CONSTRUCT` query into the given RDF @format.
The query that @stmt was created from must be either a `DESCRIBE` or `CONSTRUCT`
query; an error will be raised otherwise.
This is an asynchronous operation, @callback will be invoked when the
data is available for reading.
The SPARQL endpoint may not support the specified format, in that case
an error will be raised.
The @flags argument is reserved for future expansions, currently
%TRACKER_SERIALIZE_FLAGS_NONE must be passed.
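For example, a minimal sketch (assuming @stmt holds a `CONSTRUCT` or `DESCRIBE` query):

```c
static void
serialize_cb (GObject *source, GAsyncResult *res, gpointer user_data)
{
  g_autoptr (GInputStream) stream = NULL;
  g_autoptr (GError) error = NULL;

  stream = tracker_sparql_statement_serialize_finish (TRACKER_SPARQL_STATEMENT (source),
                                                      res, &error);
  /* Read the serialized RDF data from the stream... */
}

tracker_sparql_statement_serialize_async (stmt,
                                          TRACKER_SERIALIZE_FLAGS_NONE,
                                          TRACKER_RDF_FORMAT_TURTLE,
                                          NULL, /* cancellable */
                                          serialize_cb,
                                          NULL);
```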
a `TrackerSparqlStatement`
serialization flags
RDF format of the serialized data
Optional [type@Gio.Cancellable]
user-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
user-defined data to be passed to @callback
Finishes the asynchronous operation started through
[method@SparqlStatement.serialize_async].
a [class@Gio.InputStream] to read RDF content.
a `TrackerSparqlStatement`
a [type@Gio.AsyncResult] with the result of the operation
Executes the `INSERT`/`DELETE` SPARQL query series with the currently bound values.
This function should only be called on `TrackerSparqlStatement` objects
obtained through [method@SparqlConnection.update_statement] or
`INSERT`/`DELETE` statements loaded through
[method@SparqlConnection.load_statement_from_gresource].
An error will be raised if this method is called on
`SELECT`/`ASK`/`DESCRIBE`/`CONSTRUCT` SPARQL queries.
%TRUE if the update finished with no errors, %FALSE otherwise
a `TrackerSparqlStatement`
Optional [type@Gio.Cancellable]
Executes asynchronously the `INSERT`/`DELETE` SPARQL query series with the currently bound values.
This function should only be called on `TrackerSparqlStatement` objects
obtained through [method@SparqlConnection.update_statement] or
`INSERT`/`DELETE` statements loaded through
[method@SparqlConnection.load_statement_from_gresource].
An error will be raised if this method is called on
`SELECT`/`ASK`/`DESCRIBE`/`CONSTRUCT` SPARQL queries.
a `TrackerSparqlStatement`
Optional [type@Gio.Cancellable]
user-defined [type@Gio.AsyncReadyCallback] to be called when
the asynchronous operation is finished.
user-defined data to be passed to @callback
Finishes the asynchronous update started through
[method@SparqlStatement.update_async].
%TRUE if the update finished with no errors, %FALSE otherwise
a `TrackerSparqlStatement`
a [type@Gio.AsyncResult] with the result of the operation
The [class@SparqlConnection] the statement was created for.
SPARQL query stored in this statement.
Enumeration with the possible types of the cursor's cells
Unbound value type
Uri value type, rdfs:Resource
String value type, xsd:string or rdf:langString
Integer value type, xsd:integer
Double value type, xsd:double
Datetime value type, xsd:dateTime
Blank node value type
Boolean value type, xsd:boolean
Checks that the Tracker library in use is compatible with the given version.
Generally you would pass in the constants
[const@Tracker.MAJOR_VERSION], [const@Tracker.MINOR_VERSION], [const@Tracker.MICRO_VERSION]
as the three arguments to this function; that produces
a check that the library in use is compatible with
the version of Tracker the application or module was compiled
against.
Compatibility is defined by two things: first, the version
of the running library is newer than the version
@required_major.@required_minor.@required_micro. Second,
the running library must be binary compatible with the
version @required_major.@required_minor.@required_micro
(same major version).
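A minimal sketch of such a check:

```c
const char *mismatch;

/* Compare the compile-time version constants against the running library */
mismatch = tracker_check_version (TRACKER_MAJOR_VERSION,
                                  TRACKER_MINOR_VERSION,
                                  TRACKER_MICRO_VERSION);
if (mismatch != NULL)
  g_error ("Incompatible Tracker library: %s", mismatch);
```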
%NULL if the Tracker library is compatible with the
given version, or a string describing the version mismatch.
the required major version.
the required minor version.
the required micro version.
Escapes @literal so it is suitable for insertion in
SPARQL queries as string literals.
Manual construction of query strings based on user input is best
avoided at all costs; use of #TrackerSparqlStatement is recommended
instead.
the escaped string
a string to escape
Escapes a string for use as a URI.
a newly-allocated string holding the result.
a string to be escaped, following the tracker sparql rules
Formats and escapes a string for use as a URI. This function takes variadic arguments.
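A minimal sketch (the format string and the `filename` variable are illustrative):

```c
g_autofree char *uri = NULL;

/* Characters in `filename` not suitable for a URI are escaped in the result */
uri = tracker_sparql_escape_uri_printf ("file:///music/%s", filename);
```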
a newly-allocated string holding the result. The returned string
should be freed with g_free() when no longer needed.
a standard printf() format string, but notice the string precision
pitfalls documented in g_strdup_printf()
the parameters to insert into the format string
Formats and escapes a string for use as a URI. This function takes a `va_list`.
Similar to the standard C vsprintf() function but safer, since it
calculates the maximum space required and allocates memory to hold
the result.
a newly-allocated string holding the result.
a standard printf() format string, but notice the string precision
pitfalls documented in g_strdup_printf()
the list of parameters to insert into the format string
Returns a path to the built-in Nepomuk ontologies.
a #GFile instance.
Creates a fresh UUID-based URN.
A newly generated UUID URN.