Discussion:
ANN: Dogelog Player 1.2.6 (Segmented Fileaccess)
Mild Shock
2025-01-23 09:20:37 UTC
Dear All,

We are happy to announce a new edition of the Dogelog player:

- Segmented Fileaccess:
Due to UTF-8 problems, files were previously read completely
into memory before stream access was possible. To support
HTTP Transfer-Encoding: chunked, the UTF-8 problem has been
solved, and now only individual blocks are read into memory.

- New library(crypto):
Segmented file access is supported not only for text files
but also for binary files. This allows hash digests to be
calculated in a memory-saving manner; a sketch of the idea
follows after this list. The library implements a binding
to the SHA-256 implementations of JavaScript, Python and Java.

- Backtracking Datastructures:
The libraries library(hash) and library(tree), which so far
were realized only non-backtracking, have been expanded with
further predicates that allow backtracking updates (see the
second sketch after this list). library(hash) has been
completely rewritten to use open lists and only provides
partial persistence, whereas library(tree) provides full
persistence.

For more details: Dogelog

Have Fun!

Jan Burse, http://www.xlog.ch/ , 23.01.2025
Mild Shock
2025-01-23 15:55:39 UTC
The examined Prolog Notebook projects SWI for Sharing,
Ciao Notebooks and Tau Prolog all offer some Prolog
text coloring. We conducted an experiment to see whether
we could utilize some synergies in Dogelog Player to
provide a colorizer without using any 3rd party library.

Since the batch processing slowed down the notebook
experience, we changed our mind and started developing
a dynamic filter instead. Passive Logic Documents (PLD)
are turned into Active Logic Documents (ALD) via an
asynchronous call into a colorizer library realized 100%
in Prolog, as sketched below.
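
To make the idea of a colorizer realized 100% in Prolog a
bit more concrete, here is a minimal token classifier; the
token shapes and colour class names are invented for this
sketch and are not the ones used by Dogelog Player.

% Sketch of a pure Prolog token classifier: map a token coming
% from the tokenizer to a colour class. Token shapes and class
% names are made up for this sketch.
token_class(var(_),     'pl-var').
token_class(atom(A),    'pl-keyword') :- keyword(A), !.
token_class(atom(_),    'pl-atom').
token_class(number(_),  'pl-number').
token_class(string(_),  'pl-string').
token_class(comment(_), 'pl-comment').

keyword((:-)).
keyword((-->)).
keyword(is).

The dynamic filter would walk the token list of a code
element and wrap each token in a span carrying its class.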

See also:

Dogelog Notebooks with Prolog Coloring
https://x.com/dogelogch/status/1882455350739337230

Dogelog Notebooks with Prolog Coloring
https://www.facebook.com/groups/dogelog
Mild Shock
2025-02-06 15:30:58 UTC
Among the algebraic approaches to logic we find binary
decision diagrams. The approach is syntactical rather than
semantical, since it focuses on certain formulas that
represent truth tables. Unlike the matrix-like disjunctive
or conjunctive normal forms, they give tree-like
normal forms.
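
For readers new to them: a binary decision diagram can be
written down directly as a Prolog term, with leaves 0 and 1
and inner nodes ite(Var, Then, Else) that test one variable.
A minimal evaluator under a given assignment looks as follows
(a sketch; the representation used inside Dogelog Player may
differ).

% Evaluate a decision diagram against an assignment given as a
% list of Var-Bit pairs. Leaves are 0 and 1, inner nodes test
% one variable.
bdd_eval(0, _, 0).
bdd_eval(1, _, 1).
bdd_eval(ite(Var, Then, Else), Assign, Value) :-
    memberchk(Var-Bit, Assign),
    (   Bit =:= 1
    ->  bdd_eval(Then, Assign, Value)
    ;   bdd_eval(Else, Assign, Value)
    ).

% Example: x and (not y) as a tree-like normal form.
% ?- bdd_eval(ite(x, ite(y, 0, 1), 0), [x-1, y-0], V).
% V = 1.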

Donald Knuth also popularized zero-suppressed
decision diagrams, a binary decision diagram variant
developed by Shin-Ichi Minato. Their appeal results
from "jumps" where nodes are omitted. We focused more
on the cost of negation and arrived at zero-less
decision diagrams. The two variants might have different
niche application areas.
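
The "jumps" come from the zero-suppression rule: a node whose
then-branch is the 0 leaf is dropped and replaced by its
else-branch, so a variable that must be 0 simply does not
appear on the path. A bottom-up sketch of that reduction,
reusing the ite/3 terms from the previous sketch:

% Zero-suppression, applied bottom-up: a test whose then-branch
% reduces to the 0 leaf is elided, the diagram "jumps" over the
% variable. (Ordinary BDDs instead elide nodes whose two
% branches are equal.)
zdd_reduce(ite(Var, Then0, Else0), Reduced) :-
    !,
    zdd_reduce(Then0, Then),
    zdd_reduce(Else0, Else),
    (   Then == 0
    ->  Reduced = Else
    ;   Reduced = ite(Var, Then, Else)
    ).
zdd_reduce(Leaf, Leaf).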

See also:

Zero-Less Decision Diagrams in Dogelog Player
https://x.com/dogelogch/status/1887521934348374204

Zero-Less Decision Diagrams in Dogelog Player
https://www.facebook.com/groups/dogelog
Mild Shock
2025-02-11 11:15:20 UTC
Dogelog Player is a Prolog system for JavaScript,
Python and Java. It is 100% written in Prolog itself.
We present an enhancement to DCG translation. It uses
unification spilling to reduce the number of (=)/2
unification calls and intermediate variables needed.

Unification spilling can be readily implemented by
performing the unification (=)/2 during DCG translation.
Careful spilling that does not break steadfastness gave
us a 10% to 25% speed increase, not only for the
calculator example but also for the Albufeira transpiler;
a worked example follows below.
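
To make the effect concrete, here is the textbook translation
of a small DCG rule next to the spilled form. The clause
shapes are illustrative and not output of the actual Dogelog
Player transpiler.

% Source DCG rules:
greeting --> [hello], who.
who --> [world].

% Textbook translation of greeting//0: the terminal [hello]
% becomes an explicit (=)/2 goal plus an intermediate
% variable S1:
%
%    greeting(S0, S) :- S0 = [hello|S1], who(S1, S).
%
% With unification spilling, that (=)/2 is solved during
% translation, folding the terminal into the head and dropping
% both the extra call and the intermediate variable:
%
%    greeting([hello|S1], S) :- who(S1, S).
%
% Unifications against the output argument S are the delicate
% case: spilling them carelessly, for example across a cut,
% can break steadfastness.

% ?- phrase(greeting, [hello, world]).
% true.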

See also:

DCG Translation with Unification Spilling
https://x.com/dogelogch/status/1889270444647182542

DCG Translation with Unification Spilling
https://www.facebook.com/groups/dogelog
Mild Shock
2025-02-13 18:12:22 UTC
An autoencoder learns two functions: an encoding
function that transforms the input data, and a
decoding function that recreates the input data
from the encoded representation. We approach
autoencoders via the SAT Learning we have already
developed in the Prolog programming language.
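
Stated in Prolog, the two functions and the reconstruction
requirement are compact; the toy encode/2 and decode/2 tables
below are invented for illustration only, compressing 4-bit
one-hot inputs to a 2-bit code.

% Toy autoencoder: 4-bit one-hot inputs compressed to a 2-bit
% code. The two tables stand in for the two learned Boolean
% functions.
encode([1,0,0,0], [0,0]).
encode([0,1,0,0], [0,1]).
encode([0,0,1,0], [1,0]).
encode([0,0,0,1], [1,1]).

decode([0,0], [1,0,0,0]).
decode([0,1], [0,1,0,0]).
decode([1,0], [0,0,1,0]).
decode([1,1], [0,0,0,1]).

% Learning succeeded on an example when decoding its code
% reconstructs the input.
reconstructs(Input) :-
    encode(Input, Code),
    decode(Code, Input).

% ?- reconstructs([0,0,1,0]).
% true.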

Switching from a marginal maximizer to a conditional
maximizer gives better results, but also requires a
more costly and slower optimizer. Maximum entropy
methods were already suggested by Peter Cheeseman in
1987. Our approach is most likely still flawed, since
there is not yet a feedback loop from the decoder to
the encoder.

See also:

Maximum Entropy in SAT Autoencoding
https://x.com/dogelogch/status/1890093860782764409

Maximum Entropy in SAT Autoencoding
https://www.facebook.com/groups/dogelog
Mild Shock
2025-02-20 10:19:34 UTC
We have turned into reality our remark that a binary
decision tree can be created directly from the data.
Starting from adaptive trees, we built a new aggregate
that can perform the statistics for a Bayes Classifier
using the majority rule. We only use Prolog code!

The adaptive tree can be used like a bitwise trie,
and allows us to compute some statistics in one pass.
From these statistics we can then derive a decision
tree using the majority rule; a rough sketch follows
below. The entropy of the computed output will be
within a 1/2-bit interval of the sample output entropy.
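
A rough sketch of the bitwise-trie-with-statistics idea in
plain Prolog; the node shapes and predicate names are invented
for this sketch and assume samples with a fixed number of
input bits plus one output bit.

% Empty trie is the atom empty; inner nodes node(Zero, One)
% branch on the next input bit; leaves leaf(N0, N1) count how
% often output 0 resp. 1 was seen for that input pattern.
trie_insert(empty, [], Out, Leaf) :-
    init_leaf(Out, Leaf).
trie_insert(empty, [B|Bits], Out, Trie) :-
    trie_insert(node(empty, empty), [B|Bits], Out, Trie).
trie_insert(leaf(N0, N1), [], 0, leaf(M0, N1)) :- M0 is N0 + 1.
trie_insert(leaf(N0, N1), [], 1, leaf(N0, M1)) :- M1 is N1 + 1.
trie_insert(node(Zero, One), [0|Bits], Out, node(Zero1, One)) :-
    trie_insert(Zero, Bits, Out, Zero1).
trie_insert(node(Zero, One), [1|Bits], Out, node(Zero, One1)) :-
    trie_insert(One, Bits, Out, One1).

init_leaf(0, leaf(1, 0)).
init_leaf(1, leaf(0, 1)).

% Majority rule: predict the output class seen more often for
% the given input pattern.
trie_classify(Trie, Bits, Out) :-
    trie_lookup(Trie, Bits, leaf(N0, N1)),
    (   N1 > N0 -> Out = 1 ; Out = 0 ).

trie_lookup(Leaf, [], Leaf).
trie_lookup(node(Zero, _), [0|Bits], Leaf) :- trie_lookup(Zero, Bits, Leaf).
trie_lookup(node(_, One), [1|Bits], Leaf) :- trie_lookup(One, Bits, Leaf).

% ?- trie_insert(empty, [1,0], 1, T0),
%    trie_insert(T0, [1,0], 0, T1),
%    trie_insert(T1, [1,0], 1, T2),
%    trie_classify(T2, [1,0], Out).
% Out = 1.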

See also:

Bayes Classifier for SAT Learning
https://x.com/dogelogch/status/1892517071730135467

Bayes Classifier for SAT Learning
https://www.facebook.com/groups/dogelog