v0.1.3 Release Notes

This release includes minor bug fixes, several new features, and API changes described below. In particular, it adds compatibility with pandas version 1.3.

Blue Graph’s core


Updates to the PGFrame interface include:

  • Several minor bugfixes;

  • A new from_ontology method of PGFrame that allows importing ontologies as property graphs (based on rdflib).
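As a rough illustration of what such an import does, RDF triples can be split into node properties and edges of a property graph. The sketch below is illustrative only, not the actual from_ontology implementation; the triple format and the literal-vs-resource rule are assumptions:

```python
# Illustrative sketch: turn RDF-like (subject, predicate, object) triples
# into property-graph nodes and edges. BlueGraph's actual `from_ontology`
# is based on rdflib and handles real ontology constructs.

def triples_to_property_graph(triples):
    """Triples with a resource object become edges; literal objects
    become node properties (a simplified heuristic)."""
    nodes = {}   # node id -> {property: value}
    edges = []   # (source, predicate, target)
    for s, p, o in triples:
        nodes.setdefault(s, {})
        if isinstance(o, str) and o.startswith("http"):
            nodes.setdefault(o, {})
            edges.append((s, p, o))   # resource object -> edge
        else:
            nodes[s][p] = o           # literal object -> node property
    return nodes, edges


nodes, edges = triples_to_property_graph([
    ("http://ex.org/Cell", "label", "Cell"),
    ("http://ex.org/Neuron", "label", "Neuron"),
    ("http://ex.org/Neuron", "subClassOf", "http://ex.org/Cell"),
])
```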

Backend support

This release adds a collection of factory utilities for creating backend-specific objects (see bluegraph.backends.utils).

For example, instead of doing the following:

from bluegraph.backends.networkx import NXMetricProcessor, NXCommunityDetector
from bluegraph.backends.graph_tool import GTPathFinder, GTCommunityDetector

processor = NXMetricProcessor(input_pgframe)
finder = GTPathFinder(input_pgframe)
nx_detector = NXCommunityDetector(input_pgframe)
gt_detector = GTCommunityDetector(input_pgframe)

(which requires a number of backend-specific imports), users can now do:

from bluegraph.backends import create_analyzer

processor = create_analyzer("metric_processor", "networkx", input_pgframe)
finder = create_analyzer("path_finder", "graph_tool", input_pgframe)
nx_detector = create_analyzer("community_detector", "networkx", input_pgframe)
gt_detector = create_analyzer("community_detector", "graph_tool", input_pgframe)
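The factory pattern behind create_analyzer can be sketched as a simple registry lookup. This is an illustrative sketch, not BlueGraph's actual implementation; the stub classes stand in for the real backend analyzers:

```python
# Illustrative sketch of a backend factory like `create_analyzer`:
# a registry maps (interface, backend) pairs to concrete classes.

class NXMetricProcessor:
    """Stub standing in for the real networkx-based metric processor."""
    def __init__(self, pgframe):
        self.pgframe = pgframe

class GTPathFinder:
    """Stub standing in for the real graph-tool-based path finder."""
    def __init__(self, pgframe):
        self.pgframe = pgframe

_ANALYZERS = {
    ("metric_processor", "networkx"): NXMetricProcessor,
    ("path_finder", "graph_tool"): GTPathFinder,
}

def create_analyzer(interface, backend, pgframe, **kwargs):
    """Instantiate the backend-specific class for the requested interface."""
    try:
        cls = _ANALYZERS[(interface, backend)]
    except KeyError:
        raise ValueError(
            f"No analyzer '{interface}' registered for backend '{backend}'")
    return cls(pgframe, **kwargs)


processor = create_analyzer("metric_processor", "networkx", {"nodes": []})
```

The registry keeps all backend-specific imports in one place, so user code only depends on the factory function.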

The same holds for node embedders, for example:

from bluegraph.backends import create_node_embedder

embedder = create_node_embedder("stellargraph", "node2vec", edge_weight="weight", **kwargs)


In this release we have pinned the version of graph-tool to 2.37 due to breaking changes in the new 2.4X API (in particular, the removal of the B_min parameter from the interface of minimize_blockmodel_dl).
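For conda-based setups (graph-tool is distributed through conda-forge), the pin can be expressed in the environment file; the fragment below is a sketch and the exact channel configuration may vary:

```yaml
# environment.yml fragment: pin graph-tool to 2.37
# to avoid the 2.4X breaking changes described above
channels:
  - conda-forge
dependencies:
  - graph-tool=2.37
```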


The Neo4j-based analytics utilities were updated to use the latest Neo4j GDS 1.6.X, and a couple of minor bug fixes were made to bluegraph.backends.neo4j.pgframe_to_neo4j.

Graph preprocessing with BlueGraph

Semantic property encoding

Added PCA-based dimensionality reduction to SklearnPGEncoder. This allows an optional dimensionality-reduction step to be included in preprocessing.

For example, the following snippet creates an encoder that processes node and edge properties of the input graph and then reduces the resulting node features to 10 components and the edge features to 3 components.

encoder = SklearnPGEncoder(
    node_properties=["nprop1", "nprop2", "nprop3"],
    edge_properties=["eprop1", "eprop2", "eprop3"],
    reduce_node_dims=True,
    reduce_edge_dims=True,
    n_node_components=10,
    n_edge_components=3)
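The reduction step itself is standard PCA. As a rough numpy-only sketch of projecting node features down to 10 components (illustrative, not the encoder's code):

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project the rows of `features` onto their top principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data: the right singular vectors are the
    # principal axes, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T


rng = np.random.default_rng(0)
node_features = rng.normal(size=(50, 32))   # 50 nodes, 32 raw features
reduced = pca_reduce(node_features, 10)     # 50 nodes, 10 components
```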



Embedding service

Changes to the API of the embedding service were introduced:

  • The endpoint /models/{model_id}/details/{component} is replaced by /models/{model_id}/{component};

  • The endpoint /model/{model_id}/... is replaced by /models/{model_id}/...;

  • The endpoint /model/{model_id}/similar-points is replaced by /models/{model_id}/neighbors;

  • The endpoint /models/{model_id}/embedding now returns {"vectors": [..., ..., ...] };

  • Added POST endpoints at /models/{model_id}/embedding/ and /models/{model_id}/neighbors/ that allow querying existing points (not only predicting new ones). These endpoints are necessary when the number of requested resources is so large that a GET request URI would exceed length limits.
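For clients with hard-coded URLs, the renames above can be captured in a small path-migration helper. This is an illustrative sketch; the rewrite rules simply mirror the list above:

```python
import re

# Old-path -> new-path rewrite rules mirroring the endpoint renames above.
# Order matters: the specific `similar-points` rule must run before the
# generic `model/` -> `models/` rule.
_REWRITES = [
    (r"^/?models/([^/]+)/details/([^/]+)$", r"/models/\1/\2"),
    (r"^/?model/([^/]+)/similar-points$", r"/models/\1/neighbors"),
    (r"^/?model/(.*)$", r"/models/\1"),
]

def migrate_endpoint(path):
    """Rewrite a pre-0.1.3 embedding-service path to the new scheme."""
    for pattern, replacement in _REWRITES:
        new_path, n_subs = re.subn(pattern, replacement, path)
        if n_subs:
            return new_path
    return path  # already in the new scheme
```

For example, migrate_endpoint("/model/m1/similar-points") yields "/models/m1/neighbors".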