Building a people-centric next-generation internet.
Once marked like that, the money will be "made available" within 7 days. Without that, it's 21 days.
So I expected it 7 days later. 9 days later, nothing. I pinged support; they said "you didn't mark it as shipped, so it takes 21 days".
I not-so-kindly informed them that I had marked it as a service on the same day it was created, so they should get their act together. The next day, without a response, the money was available.
What does "available" mean?
Paypal for payments: never again.
I made that available to clients, and one used it. Nice, easy transaction, got paid immediately.
Except PayPal keeps the money back for "security" reasons. Ok, so a day or two, who cares - I thought.
But it's not that easy.
First, when you use external software to create the "invoice" in PayPal, you have to manually mark it as processed. You have a choice of adding some kind of postage information or marking it as a service, which I did immediately.
This is your COVID wake-up call:
It is 100 seconds to midnight
You need to build Meson projects with your CI? I may have a Docker image for you, bumped to Meson 0.58 today.
You also need to build them for Android's NDK? Ok, that also works.
Someone other than me is using these, but I'm not sure who.
My approach is to complain to YT every time this happens. It would be nice if this idea gained momentum.
So YouTube wants me to verify my age for some videos. My account with YouTube is about as old as the site itself, which in Germany means it's old enough to watch whatever YouTube allows under its content policies.
(The video in question is a sex toy review recommended as funny, not explicit in any other way.)
It occurs to me that this is US tech colonialism; they're trying to apply a kind of prudery standard that doesn't exist where I live. Sure, it's out of laziness, but still.
... one channel don't negatively impact the other channels.
And now you know why I post so little about what's going on with my projects, because when I do, it's a massive thread.
... run either lossy as a UDP connection, or lossless as a TCP connection, with some other modes added. I've done the design for this, but the implementation of it is what's up next.
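As a sketch of what such per-channel characteristics might look like (names invented for illustration, not the actual interpeer API): a channel can be configured to behave like UDP (lossy, unordered) or like TCP (lossless, ordered) while still sharing the same underlying connection.

```python
from dataclasses import dataclass

@dataclass
class ChannelConfig:
    """Per-channel characteristics (illustrative only): whether lost
    packets are retransmitted, and whether delivery preserves order."""
    reliable: bool = False   # retransmit lost packets (lossless, TCP-like)?
    ordered: bool = False    # deliver in sequence?

# E.g. a connection carrying a video stream next to a control channel:
VIDEO = ChannelConfig(reliable=False, ordered=False)   # lossy, UDP-like
CONTROL = ChannelConfig(reliable=True, ordered=True)   # lossless, TCP-like
```

Other mode combinations (say, reliable but unordered) fall out of the same two knobs.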
After that is encryption. Encryption is a little different in this kind of context because most protocols assume a single link and a single concern (channel).
What we need to do is ensure that multiple links can authenticate as part of the same connection easily. And then we need to ensure that packets lost in...
... over and over again. That goes against the principle of a NAT, which tries to map multiple clients behind the NAT to a single IP address with multiple ports. All you're going to achieve that way is make the NAT run out of ports.
No, much better to re-use a single pierced connection. But that means different application concerns should be allowed their own independent channels.
The next step here is to make the channel characteristics configurable. That means each channel should be able to...
The next steps for this protocol stack won't be multi-link, though - that's something I'll add, but it doesn't have to be right now.
What's done (in the sense that there exists a test suite for it) is instead a multi-channel protocol. What this allows you to do is create independent channels over the same connection. The idea here is that in a p2p network, NAT piercing makes it more efficient to re-use a single established connection for multiple purposes than to try to pierce the NAT...
But since the paper will be published in some way or another, it's (going to be) public knowledge, so I'll apply the same principles in the #interpeer protocol stack.
Sure, those nodes may not end up flying... that's not the point, though. The same protocol approach also solves the real problem we e.g. have with video streams at the moment. Leave your home WiFi while chatting with your friends and step into LTE land, and it's up to the chat software to deal with that. Which some don't.
specifically, we're aiming to use it in flying routers onboard drones. Each link the router has available is some kind of distinct COTS wireless tech, such as LTE or WiFi.
Whenever a link is connected to an access point, its address can be added to the multi-link connection, and removed when it goes out of range.
Application protocols should be tunnelled through this multi-link setup. The aim is to provide greater reliability through failover.
And that's my day job at the moment.
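A minimal sketch of that dynamic link set, with a hypothetical API (the real router logic would also handle routing and tunnelling):

```python
class MultiLinkConnection:
    """Sketch only: each COTS radio (LTE, WiFi, ...) contributes an
    address to the connection while it is in range of an access point."""

    def __init__(self):
        self.links = set()

    def link_up(self, addr: str):
        # Link associated with an access point: add its address.
        self.links.add(addr)

    def link_down(self, addr: str):
        # Link out of range: remove it; traffic fails over to the rest.
        self.links.discard(addr)

    @property
    def connected(self) -> bool:
        # The tunnel survives as long as any one link remains.
        return bool(self.links)
```

Tunnelled application protocols never see the link churn; they only notice if the whole set empties out.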
... state machine examines the states of all the individual uniflow state machines and concludes whether the connection as a whole can be considered "established" or in any prior state.
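That aggregation step can be sketched roughly like this (state names are illustrative, not taken from the paper): the connection state machine runs no handshake of its own, it only folds over the uniflow states.

```python
from enum import IntEnum

class FlowState(IntEnum):
    # Illustrative uniflow states, ordered by progress.
    IDLE = 0
    HANDSHAKE = 1
    ESTABLISHED = 2

def connection_state(ingress_states: list, egress_states: list) -> FlowState:
    """Derive the connection state from the uniflow state machines.

    A direction counts as far as its most advanced uniflow (one
    established link per direction suffices); the connection as a
    whole is only as far along as the direction lagging behind."""
    if not ingress_states or not egress_states:
        return FlowState.IDLE
    return min(max(ingress_states), max(egress_states))
```

So the connection reports "established" exactly when both directions have at least one established uniflow, matching the at-least-one-of-each view.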
We intended this as a tunneling protocol. There is a kind of circular reasoning here: it is based on UDP because we wanted to avoid TCP meltdown, but that is also what allows us to reason about these connection state machines, etc, etc.
Long story short, this is to be used in routers;
... communications from any specific implementation as in MPTCP or SCTP or wherever. And in the form of state machines, you can reason about the correctness of a protocol in the abstract.
So that's what we set out to do in the paper. It turns out you need three state machines, because an ingress uniflow needs some kind of messages to be passed in order to "know" the other side is active, as does an "egress" uniflow. Of course they interact with each other as well. Finally, the connection...
... and the same for receiving packets.
So a "connection" is essentially two complementary uniflows; both have the same local and remote address and opposing directions. The local address is the destination in ingress, and the source in egress, etc.
With this view established, a multi-link connection is one where you have *at least* one active egress and one active ingress uniflow, but multiple of each type should be possible.
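As a rough sketch of this model (names are hypothetical, not the actual protocol stack's API): a uniflow is just addresses plus a direction, two uniflows are complementary when they share addresses but oppose in direction, and a multi-link connection is established once each direction has at least one active uniflow.

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    INGRESS = "ingress"  # incoming: local address is the destination
    EGRESS = "egress"    # outgoing: local address is the source

@dataclass(frozen=True)
class Uniflow:
    local: str
    remote: str
    direction: Direction
    active: bool = True

def complementary(a: Uniflow, b: Uniflow) -> bool:
    """Two uniflows form a 'connection' if they share local/remote
    addresses but run in opposing directions."""
    return (a.local == b.local and a.remote == b.remote
            and a.direction != b.direction)

def multilink_established(uniflows: list) -> bool:
    """A multi-link connection needs at least one active ingress and
    one active egress uniflow; multiple of each type are allowed."""
    dirs = {f.direction for f in uniflows if f.active}
    return Direction.INGRESS in dirs and Direction.EGRESS in dirs
```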
Taking this abstract view allows you to separate multi-link...
That's why the TCP extension for doing this with multiple addresses at each node is called Multi-path TCP.
QUIC is aware that, being based on UDP, this is no longer possible, so it talks about "uniflows", which I think is a good term, so we adopted it.
A uniflow is technically just a source and destination address, and a direction - ingress (incoming) or egress (outgoing). All you can say for certain is that packets sent along one uniflow are separate from packets sent along another, ...
Why three? Well, if you base a protocol on datagrams (UDP, or directly on IP packets), there is no such thing as a "connection", and without TCP's flow control, there is also no such thing as a "flow".
TCP manages flows by preferring to send packets to the same destination across the same physical link, and then sending response packets in the opposite direction. Since all nodes do that, you have effectively established a path for your packets to traverse bidirectionally.
A private instance for the Finkhäuser family.