The EU's GDPR includes a "right to explanation" in Recital 71.
"In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision."
The scope of this right is very broad:
"[a decision]...which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention"
To comply with the GDPR, which applies to any company doing business in the EU, whether B2B or B2C, you can continue to use algorithmic methods to make decisions that concern people, but there must be a human override for such decisions, and there must be an explanation of how each decision was reached.
The full meaning of the above will be settled over the coming years by case law. However, it seems unlikely that handing a complainant some Java code, or the weights and architecture of a trained neural network, will satisfy this requirement.
With DARL you can achieve the same performance as with other kinds of AI, including deriving DARL code by machine learning, but DARL code explains itself. A piece of DARL code explicitly declares its contents in a cut-down form of plain English. It is too early to know what a court might accept, but DARL stands a much better chance of meeting this requirement.
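To give a sense of what "a cut-down form of plain English" means in practice, here is an illustrative sketch of a credit-decision rule in the spirit of DARL's rule style. The input names, categories, and exact syntax are hypothetical, invented for this example rather than taken from any real DARL ruleset:

```
// Illustrative pseudocode only; actual DARL syntax may differ.
ruleset credit_decision
{
    input numeric income;
    input numeric existing_debt;
    output categorical decision { approve, refer, refuse };

    // Each rule is a readable statement of the decision logic.
    if input.income is high and input.existing_debt is low
        then output.decision will be approve;

    if input.existing_debt is high
        then output.decision will be refer;
}
```

The point is that the decision logic is stated directly in the artifact itself: a reviewer, or a data subject exercising the right described above, can read which inputs were considered and why a given outcome was produced, rather than inspecting opaque numeric weights.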