Post History

Kaspa Q&A Data availability in the vProgs design


posted 6d ago by FreshAir28 · edited 6d ago by FreshAir28

Answer
#6: Post edited by FreshAir28 · 2025-11-18T17:57:41Z (6 days ago)
For simplicity of description, I assume below that any interaction is a write interaction on all associated accounts.

First, it is important to emphasize the following: while **A** and **B** are each sovereign logic zones, consensus dictates that nodes running **A** also, at times, store account data belonging to **B**, and conversely that **B** nodes store **A**'s account data. The L1 regulates and is fully aware of this foreign storage allotment, though it remains oblivious to the actual data contents.
To answer the question plainly: a transaction composing two vProgs, **A** and **B**, is only legal to begin with if the L1 ensures that, at the time of its sequencing, **A** has all the data from **B** required to compute it (and vice versa for **B**).

This can be ensured in several ways:
1) L1 knows that **A** was already storing the latest data of **B** on the accounts relevant to the transaction.
2) L1 knows that **A** does not have the latest data required, but already stores a sufficient "covering layer" in the past of this latest data, from which **A** can derive it by recomputing past transactions (of **B**, and of others). Notice that, due to dependencies among past transactions, this covering layer quite plausibly contains other accounts from **B**, as well as accounts from other vProgs **C**, **D**, **E**, etc. Any states **A** computes through this "climb up" are also stored by **A** for subsequent transactions (up to some point in the future where they will be pruned). As a sidenote, the climb up to recent data incurs a foreign computational burden on **A**, which must itself be regulated by the L1.
3) If **A** does not even have a full "covering layer", the layer can be patched up with "witness data" revealing the plaintext data of accounts that were committed to in ZK proofs of the various required vProgs at some point in the past. It is the transaction issuer's responsibility to supply this data (and a proof revealing it) within the transaction, if required. The L1 verifies the reveal, and if verification fails the transaction is deemed illegal.
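The three cases above amount to a single admission check at sequencing time. Below is a minimal sketch under assumed names (`is_legal`, `latest`, `covering_layer`, `commitments` are all illustrative, not part of the vProgs design), and a plain SHA-256 digest stands in for the ZK commitment so the control flow is runnable:

```python
# Illustrative sketch only: a SHA-256 digest plays the role of the past ZK
# commitment, and all names are hypothetical, not the actual vProgs API.
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def is_legal(tx, latest, covering_layer, commitments):
    """Decide legality of a composing transaction at sequencing time.

    tx:             {"accounts": [...], "witnesses": {acct: plaintext bytes}}
    latest:         accounts whose latest data A already stores    -- case (1)
    covering_layer: accounts derivable by replaying past txs       -- case (2)
    commitments:    acct -> digest committed to in a past proof    -- case (3)
    """
    for acct in tx["accounts"]:
        if acct in latest:
            continue                    # (1) latest data already stored
        if acct in covering_layer:
            continue                    # (2) derivable via the "climb up"
        witness = tx["witnesses"].get(acct)
        if witness is not None and sha(witness) == commitments.get(acct):
            continue                    # (3) reveal verified by the L1
        return False                    # reveal missing or failed: illegal
    return True

# A toy transaction touching three accounts, one per case:
commitments = {"b2": sha(b"b2: balance=7")}
tx = {"accounts": ["a1", "b1", "b2"],
      "witnesses": {"b2": b"b2: balance=7"}}
assert is_legal(tx, latest={"a1"}, covering_layer={"b1"},
                commitments=commitments)
```

A tampered or missing witness makes `is_legal` return `False`, mirroring how a failed reveal makes the transaction illegal.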
(3) could raise some eyebrows: clearly, if it happened all the time, transactions would eat up a great deal of bandwidth and the system would suffer.

First, it is precisely for this reason that vProgs are described with a built-in separation into accounts, with the intention that each account's data be reasonable in size.

Second, the guiding logic behind this scheme is an empirical claim about the nature of transaction usage: an account recently composed with has a considerably higher chance of being composed with again in the near term than an account last composed with a month ago. In other words, in any time period most transactions involve the same relatively few "hot" accounts, touched over and over. Since a hot account's data only needs to be revealed once, (3) shouldn't cause a bottleneck for regular usage patterns.
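The "reveal once, then cached" point can be made concrete with a toy count of reveals over a transaction stream (the function and account names are hypothetical, and this models only the bookkeeping, not the actual protocol):

```python
# Toy model of the "hot accounts" argument: once an account's data has been
# revealed via witness data, the node stores it locally, so later
# transactions touching the same account need no further reveal.
def reveals_needed(tx_account_lists):
    """Count reveals across a stream of transactions, where each
    transaction is given as the list of accounts it touches."""
    stored = set()   # account data already held locally
    reveals = 0
    for accounts in tx_account_lists:
        for acct in accounts:
            if acct not in stored:
                reveals += 1      # witness data must accompany this tx
                stored.add(acct)  # thereafter the data is held locally
    return reveals

# 1000 transactions over the same two hot accounts need only two reveals,
# while 1000 transactions each touching a fresh cold account need 1000.
assert reveals_needed([["hot1", "hot2"]] * 1000) == 2
assert reveals_needed([[f"cold{i}"] for i in range(1000)]) == 1000
```

Under the empirical claim in the text, real traffic looks far more like the first stream than the second, which is why (3) is not expected to dominate bandwidth.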
#5: Post edited by FreshAir28 · 2025-11-18T17:56:57Z (6 days ago)
#4: Post edited by FreshAir28 · 2025-11-18T17:32:32Z (6 days ago)
#3: Post edited by FreshAir28 · 2025-11-18T17:31:07Z (6 days ago)
#2: Post edited by FreshAir28 · 2025-11-18T17:28:06Z (6 days ago)
#1: Initial revision by FreshAir28 · 2025-11-18T17:25:58Z (6 days ago)