IPFS Collaborative Notebook for Research

What's in This Repo?

We use this repo in two ways:

  • Issues to track several kinds of discussion on topics related to research and IPFS: random ideas, and proposals for new systems or features that don't yet belong in a specific repo. All the discussion happens in the issues.
  • OPEN_PROBLEMS lists and unpacks the currently known Open Problems for IPFS.

Disclaimer: While we work hard to document our work as it progresses, research progress may not be fully reflected here for some time, or may be worked out out-of-band.

Request for Proposals

Some of our Open Problems have open RFPs. For all of them, we welcome collaborations (potentially leading to new constructions, discoveries, and publications). Please reach out at [email protected].

Funding

Protocol Labs runs an RFP (Request For Proposals) Program with the goal of funding individuals and groups to come up with novel solutions to the Open Problems found in this and other repos. If interested, please follow the link to check the active RFPs.

Related research repos

Contribute

Feel free to join in. All welcome. Open an issue!

This repository falls under the IPFS Code of Conduct.

License

MIT

Comments
  • npm on IPFS

    • [x] ipfs-blob-store
     • [x] ipfs-daemon-ctrl: control whatever daemon is installed
    • [x] plug in ipfs-blob-store to reginabox
     • [ ] figure out if we can 302 publish
    • [x] mirror the registry
    • [x] bundle it all into one module
    • [x] make physical nodes

    cc @bengl

  • idea: support for transactional groups of writes

    I had a brief discussion with @whyrusleeping and wanted to start the ball rolling on a public discussion here.

    Motivation: The ability to implement an ACID database or other transactional system in a highly concurrent environment. In my use case with Peergos I am imagining many users concurrently writing to the same IPFS daemon. This means that the GC will essentially need to be called at random relative to each thread. Each user will be building up trees of objects using object.patch, before eventually committing (pinning).

    Implementation: What I had in mind was an optional parameter to all IPFS write commands which was a transaction ID (and maybe a timeout as well). Then all writes are tagged with this temporary ID within a transaction. IPFS just keeps a map from transaction ID to a set of hashes, and these form new object roots during GC. Then when a transaction is complete you could call a completeTransaction(tid) which dumps the mapping (or let it time out). This gives you guarantees that an object you've just written won't be GC'd regardless of the GC activity before you are finished with it.
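    The bookkeeping described above can be sketched as follows. All names here are hypothetical, not the go-ipfs API: writes are tagged with a transaction ID, the tagged hashes act as extra GC roots, and completing the transaction drops the mapping.

```go
package main

import (
	"fmt"
	"sync"
)

// TxRoots maps transaction IDs to sets of block hashes. Hashes recorded
// here are treated as additional GC roots until the transaction completes.
type TxRoots struct {
	mu    sync.Mutex
	roots map[string]map[string]struct{} // txID -> set of block hashes
}

func NewTxRoots() *TxRoots {
	return &TxRoots{roots: make(map[string]map[string]struct{})}
}

// Tag records a hash written under txID, protecting it from GC.
func (t *TxRoots) Tag(txID, hash string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.roots[txID] == nil {
		t.roots[txID] = make(map[string]struct{})
	}
	t.roots[txID][hash] = struct{}{}
}

// CompleteTransaction drops the mapping; the hashes are then kept alive
// only by ordinary pins. A per-transaction timeout could call this too.
func (t *TxRoots) CompleteTransaction(txID string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.roots, txID)
}

// Protected reports whether a hash is currently a temporary GC root.
func (t *TxRoots) Protected(hash string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	for _, set := range t.roots {
		if _, ok := set[hash]; ok {
			return true
		}
	}
	return false
}

func main() {
	tx := NewTxRoots()
	tx.Tag("tx1", "QmAaa")
	fmt.Println(tx.Protected("QmAaa")) // true while tx1 is open
	tx.CompleteTransaction("tx1")
	fmt.Println(tx.Protected("QmAaa")) // false once committed
}
```

    A GC run would simply union these sets into its root set before sweeping.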

  • Censorship resistance, especially in China

    In China, IPFS is unusable because of the GFW. What does GFW do:

    • Block all default bootstrap nodes
    • Set up a node and start collecting ips to ban

    What can we do to make IPFS censorship-resistant?

  • "Multi" Org + Protocol Suite Name

    We need an org name for the "multi" protocol family:

    • multihash
    • multiaddr
    • multikey
    • multicodec
    • multistream
    • multibase

    The possibilities that come to mind:

    • multiprotocols
    • multis
    • multicodes
    • multisuite
    • multiprotos
    • multiformats

    This word should fit these sentences:

    • Multihash and Multiaddr are both [ part of ] ________ .
    • [The] _________ [ is / are ] a collection of formats or protocols that use efficient self description to provide interoperability and cryptographic agility.
    • [The] ________ [ is / are ] so useful.
    • If you design it as a ______ system, then you're not locked in.
    • ________ system
    • https://github.com/______
    • https://_______.io

    Try saying them out loud.

  • Pinning Service API

    A Pinning Service is a service that accepts hashes from a user and hosts the associated content, e.g. Pinata, Infura, et al.

    The rationale behind defining a pinning service API is to have a baseline functionality and interface that can be provided by these services so that tools can be built on top of a common base of functionality.
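    As an illustration of what such a common base would let tools program against, here is a hypothetical Go interface with a toy in-memory implementation. The method names are illustrative and not taken from the draft spec:

```go
package main

import "fmt"

// PinStatus is a minimal view of a pin held by a service.
type PinStatus struct {
	CID    string
	Status string // e.g. "queued", "pinning", "pinned", "failed"
}

// PinningService is the hypothetical baseline surface: add, query,
// remove, and list pins. Tools target this, not a vendor API.
type PinningService interface {
	AddPin(cid string) (PinStatus, error)
	GetPin(cid string) (PinStatus, error)
	RemovePin(cid string) error
	ListPins() ([]PinStatus, error)
}

// memoryService is a toy implementation, enough to exercise the interface.
type memoryService struct{ pins map[string]PinStatus }

func newMemoryService() *memoryService {
	return &memoryService{pins: map[string]PinStatus{}}
}

func (m *memoryService) AddPin(cid string) (PinStatus, error) {
	s := PinStatus{CID: cid, Status: "pinned"}
	m.pins[cid] = s
	return s, nil
}

func (m *memoryService) GetPin(cid string) (PinStatus, error) {
	s, ok := m.pins[cid]
	if !ok {
		return PinStatus{}, fmt.Errorf("not pinned: %s", cid)
	}
	return s, nil
}

func (m *memoryService) RemovePin(cid string) error {
	delete(m.pins, cid)
	return nil
}

func (m *memoryService) ListPins() ([]PinStatus, error) {
	out := make([]PinStatus, 0, len(m.pins))
	for _, s := range m.pins {
		out = append(out, s)
	}
	return out, nil
}

func main() {
	var svc PinningService = newMemoryService()
	svc.AddPin("QmExample")
	s, _ := svc.GetPin("QmExample")
	fmt.Println(s.Status) // pinned
}
```

    A real service would back the same interface with HTTP calls; the point is only that tooling depends on the interface, not on any one provider.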

    ~~Draft pinning service api: https://app.swaggerhub.com/apis/lanzafame/ipfs-pinning-service/v0.0.0~~

    Latest draft: https://app.swaggerhub.com/apis/lanzafame/ipfs-pinning-service/0.0.1

    //cc @obo20 @MichaelMure

    Anyone who knows someone running an IPFS pinning service, please tag them here, thanks. @parkan

  • Introduce Ed25519 public key IPFS identities

    At the moment, IPFS identities are either SHA256 hashes of RSA public keys or SHA256 hashes of Ed25519 public keys. I propose introducing a new type of IPFS identity (by adding a new multihash codec) that is simply the Ed25519 public key itself (not hashed). This is for several reasons:

    1. Ed25519 public keys are 256 bits, the same size as SHA256 hashes. So identities remain short.
    2. IPFS publishes public keys to the libp2p DHT. These DHT public key records take up resources and need to be maintained otherwise they expire.
    3. The process of broadcasting and retrieving these public keys is unnecessary complexity, and unnecessarily adds brittleness to the system.
    4. In the context of OpenBazaar, we want to maximise buyer privacy and protect buyers against spam/DoS attacks. A radical way to protect buyers is simply to never publicly broadcast their peer id (i.e. only share it with vendors they buy stuff from). Having the peer id be the Ed25519 public key would remove the need to publish the peer id to the DHT.
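    The size argument in point 1 can be checked with a sketch, assuming the multihash "identity" code (0x00) is used to embed the key bytes instead of a digest. The multihash framing is <varint code><varint length><digest>, so a 32-byte Ed25519 key yields a 34-byte identity, barely longer than a SHA256 multihash:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// identityMultihash wraps raw bytes in a multihash using the "identity"
// hash code (0x00): <varint code><varint length><data>. For a 32-byte
// Ed25519 public key both varints fit in a single byte each. This is a
// sketch of the idea, not the libp2p peer-ID implementation.
func identityMultihash(data []byte) []byte {
	if len(data) > 127 {
		panic("sketch only handles single-byte varint lengths")
	}
	out := []byte{0x00, byte(len(data))}
	return append(out, data...)
}

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	mh := identityMultihash(pub)
	// 2 bytes of multihash header + 32 bytes of key: the identity stays
	// short and carries the key itself, so no DHT public-key record is
	// needed to verify signatures from this peer.
	fmt.Println(len(mh)) // 34
}
```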

    Thoughts?

    cc @jackkleeman

  • ipfs-cluster - tool to coordinate between nodes

    It is clear we will need a tool / protocol on top of IPFS to coordinate IPFS nodes together. This issue will track design goals, constraints, proposals, and the progress.

    What to Coordinate

    Things worth coordinating between IPFS nodes:

    • collaborative pin sets -- back up large pin sets together, to achieve redundancy and capacity constraints (including RAID-style modes).
    • authentication graphs -- trust models, like PKIs or hierarchical auth of control.
    • bitswap guilds -- the ability to band together into efficient data trade networks
    • application servers -- afford redundancy guarantees to hosted protocols / APIs

    and more (expand this!)

    Consensus

    Many of these require consensus, and thus we'll likely bundle a simple (read: FAST!) consensus protocol with ipfs-cluster. This could be Raft (etcd) or Paxos, and does not require byzantine consensus. Having byzantine consensus would be useful for massive untrusted clusters, though that approaches Filecoin and is a very different use case altogether.
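    Whichever consensus protocol is bundled, the replicated part reduces to a deterministic state machine that every peer applies the agreed log to. A toy sketch of such a pin-set state machine, with illustrative names:

```go
package main

import "fmt"

// Op is one entry in the replicated log. Whether the log is ordered by
// Raft or Paxos, every node applies the same entries in the same order,
// so pin sets converge.
type Op struct {
	Kind string // "pin" or "unpin"
	CID  string
}

// PinState is the deterministic state machine each cluster peer runs.
type PinState struct{ pins map[string]bool }

func NewPinState() *PinState { return &PinState{pins: map[string]bool{}} }

// Apply mutates the state for one agreed log entry.
func (s *PinState) Apply(op Op) {
	switch op.Kind {
	case "pin":
		s.pins[op.CID] = true
	case "unpin":
		delete(s.pins, op.CID)
	}
}

func (s *PinState) Pinned(cid string) bool { return s.pins[cid] }

func main() {
	log := []Op{{"pin", "QmA"}, {"pin", "QmB"}, {"unpin", "QmA"}}
	a, b := NewPinState(), NewPinState()
	for _, op := range log { // both replicas apply the agreed log
		a.Apply(op)
		b.Apply(op)
	}
	fmt.Println(a.Pinned("QmB"), a.Pinned("QmA")) // true false
	fmt.Println(b.Pinned("QmB"))                  // true
}
```

    The consensus library's only job is ordering the log; everything cluster-specific lives in Apply.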

    cluster == a virtualized ipfs node

    One goal is to represent a virtualized IPFS node sharded across other nodes. This makes for a very nice modular architecture where one can plug ipfs nodes into clusters, and clusters into larger clusters (hierarchies). This makes cluster a bit harder to design, but much, much more useful. Backing up of massive data (like all of archive.org or all of wikimedia, or all scientific data ever produced) would thus become orders of magnitude simpler to reason about.

    The general idea here is to make ipfs-cluster provide an API that matches the standard ipfs node API (i.e. with an identity, being able to be connected to, and providing the ipfs core methods).

  • Tutorial: How to build a Collaborative Editing Application with IPFS using CRDTs

    @pgte just released the y-ipfs-connector and a Video Tutorial explaining how to use it!

    The video is very fun to watch, and it's impressive how few lines of code it takes to get it all set up!

    Video at: https://www.youtube.com/watch?v=-kdx8rJd8rQ

    Awesome work @pgte 👍🏽👍🏽👍🏽

  • IPFS WebComponents

    So, i've been doing a fair amount of research and experimentation, and i think i've come to a very nice point.

    Instead of using the full-blown webcomponent stack, which seems to be unstable, hardly supported by browsers, and heavily tied into different frameworks (Polymer et al.), I'm only using the custom-element part here. This gives us CSS namespacing and HTML imports, and not much more. Still enough to provide a demo of what is possible.

    The biggest part is the JS imports over ipfs, which i think fits very well into the model we're envisioning. Please check out krl/ipfs-import for a demo and readme!

    @jbenet @diasdavid (feel free to ping more ppl, not sure of all web github handles)

  • S3-backed IPFS

    I'm keen to adopt IPFS for some projects I'm working on, but a key requirement for me is reliable, scalable, and cheap storage. Normally I'd use S3, but it appears that it isn't possible to back IPFS with S3 at the current time?

    @VictorBjelkholm @flyingzumwalt @lgierth

  • Collab with/support the people putting IIIF on IPFS

    The work to put IIIF on IPFS (i.e. go-iiif) could open the gates to all the big cultural institutions (British Library, Bibliothèque nationale de France, the Getty, all the big universities...) storing and serving all their image collections from IPFS. That would be huge for IPFS.

    Some of his work:

    • https://github.com/thisisaaronland/go-iiif
    • https://github.com/atomotic/go-iiif-docker

    People

    • @edsilv
    • @thisisaaronland
    • @atomotic

    I had a conversation with @edsilv on the Internet Archive Slack: https://gist.github.com/flyingzumwalt/02fbf076fbe778b55c66ae3d6bef8927

    @diasdavid this is relevant to your presentation at the Web Archiving Conference. Ed mentioned it in the slack conversation.

  • Use crdt for the MFS?

    @Jorropo and I briefly discussed the idea of using CRDTs in the MFS.

    The idea is to avoid storing directories as trees in favor of a highly concurrent, mutable representation of files inside the MFS:

    Using the current IPFS-Cluster implementation, each file would be added as an individual pin and its path would be stored inside the name field. Folders would exist just because there are files in them (git-style).

    This allows multiple write, rewrite, move, and rename operations to happen concurrently, and it would not even be limited to a single node: as in ipfs-cluster, multiple nodes could be allowed to write to the same pinset, by trust. So a user could link up multiple nodes and have them share the same MFS.

    Analogous to creating an archive, the user could choose to "freeze" a folder (and its file/folder structure). This would require creating a partial lock (e.g. via a hash of the directory path) on that section of the local MFS, to block any further operations. Then all files and folders, as currently represented by the pinset, would be linked into a UnixFS CID and added as a regular pin.

    Additionally, the "frozen" pin of the folder could be shown as a version with a timestamp in the GUI when displaying the folder.
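    A toy sketch of this flat, pin-per-file MFS, with a "freeze" that collects a subtree into the sorted listing a real implementation would turn into an immutable UnixFS DAG. All names here are hypothetical:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// FlatMFS models the MFS not as a tree but as a flat set of pins, one per
// file, with the full path stored in the pin name (as ipfs-cluster does).
// Directories exist only implicitly, git-style.
type FlatMFS struct{ files map[string]string } // path -> CID

func NewFlatMFS() *FlatMFS { return &FlatMFS{files: map[string]string{}} }

// Write and Rename are independent per-path operations, so concurrent
// writers (or multiple trusted nodes) never contend on a directory node.
func (m *FlatMFS) Write(path, cid string) { m.files[path] = cid }

func (m *FlatMFS) Rename(from, to string) {
	m.files[to] = m.files[from]
	delete(m.files, from)
}

// Freeze collects every file under prefix into a deterministic, sorted
// listing: the input from which an immutable UnixFS DAG would be built
// and pinned as a regular pin.
func (m *FlatMFS) Freeze(prefix string) []string {
	var out []string
	for p, cid := range m.files {
		if strings.HasPrefix(p, prefix) {
			out = append(out, p+" "+cid)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	mfs := NewFlatMFS()
	mfs.Write("/photos/a.jpg", "QmA")
	mfs.Write("/photos/b.jpg", "QmB")
	mfs.Rename("/photos/a.jpg", "/photos/c.jpg")
	for _, line := range mfs.Freeze("/photos/") {
		fmt.Println(line)
	}
}
```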

    Rationale

    Apart from the main advantages already mentioned (reduced overhead for concurrent access to the MFS, the ability to sync multiple clients in a distributed fashion on a single MFS view, etc.), it would also strictly separate the "CID" representation of a folder from its MFS representation.

    I talked with various people using IPFS while they took their first steps, and they had a hard time understanding the CID concept as an immutable data structure.

    The default assumption is that a CID identifies a common point in the network where you provide the most recent version of some content (which is actually what IPNS does). So they are very surprised when they create a folder for a friend, share the CID with them, put a new file into the folder, and "the new file is not appearing" in the friend's view of their folder.

    IPNS integration in IPFS Gui

    I think it would also allow an easier IPNS integration in the GUI. The UX would be much better if users could see "their folder" in all of their clients, "freeze" the folder into a CID, and have a way to move the IPNS entry from the last version to the new version of the folder, e.g. with a button "update IPNS" next to the version.

  • RFC Custom "CID" to define blockchain smart contract interaction and value retrieval

    Hi,

    I'm not entirely sure what an appropriate title for this is. We'll probably figure something out while discussing it here. Also, note that everything in this proposal is a "nice to have". None of it is required for the future feature I'll describe in the IPNS use case, but it would certainly make it a lot easier!

    Rationale

    This proposal aims to provide a "standard" way to get a value from an EVM-compatible blockchain. It only describes the message format that would allow another user/application to interpret and use it to get a value from said chain. It does not describe how to make a connection to a given blockchain, but the subject is touched on in the IPNS use case.

    For this to work, I'd like to have a new CID (actually a new multicodec) to make it easily detectable that a given CID with that codec is intended to be a file that describes, in a defined format, how to get a value from a blockchain.

    Proposing dag-bcv (bcv = blockchainvalue)

    Now the format itself in dag-language isn't clear to me yet so i'll just post it as json for the time being.

{
  "blockchainID": 10,
  "contractAddress": "0x...",
  "getFunction": {
    "functionName": "getValue",
    "args": [{
      "parameter" : "address",
      "value": "0x..."
    }, {
      "parameter" : "uint256",
      "value": 1234565789
    }],
    "returnValue": "uint256"
  }
}
    

    I'll go over each property and its intended meaning.

    blockchainID

    This is the blockchain ID as it's known on https://chainlist.org/. These are the same IDs you also need in, for example, MetaMask, so it seems to make sense to rely on their data.

    contractAddress

    This would be the deployed smart contract from which a value is going to be requested.

    getFunction

    This is an object that is going to describe the actual function call that needs to be done on the blockchain along with the arguments that would need to be passed in.

    functionName

    The function to be executed on the given smart contract address (contractAddress).

    args

    The types of the arguments, in order. This is an array of objects where each object has a parameter (the type of argument that is expected) and a value (the actual value that needs to be passed as the function argument).

    returnValue

    This describes the type of return value that the caller can expect.

    What the new CID would look like

    Disclaimer: I'm not sure about this at all. This just "seems" like the shortest and smartest route, but I might be completely wrong here. This new CID would have dag-bcv as its codec, as opposed to dag-pb. It would not be computed (by hashing) but would be constructed from another CIDv1. Say you add the JSON from the example above using the normal ipfs add command. From the resulting CID we take only the sha2-256 part. Then we compose a new CID with dag-bcv as the codec. In terms of protocol logic, it is then required that only the sha2-256 of a CIDv1 can be used as the sha2-256 part of this dag-bcv CID.

    A downside of this approach is that any CID could potentially be a dag-bcv-encoded one while the data behind it might not match the format described here. So this might not be the most ideal solution? If it's not, I'll need some help understanding what the ideal solution is :)

    An example smart contract

    As an example contract you can look at this one: https://polygonscan.com/address/0x41ec72b8b36269b85e584d7b0187067a6cc1a04d#code I made it specifically to store key -> value pairs that are unique on a per-wallet-address basis. So each individual wallet can store the key "settings", which is then mapped to a uint256 value. Note that the key is hashed too; the string version is never stored.

    I am using this in one place to store only the sha2-256 hash part of a CID, which conveniently fits in one uint256 slot. That allows me to get "mutable" content on two systems (IPFS and blockchain) that are essentially immutable. Again, note that there isn't a single use of IPNS in this. More on that later.

    If I were to use that smart contract as a basis, then the above JSON description file would look as follows:

{
  "blockchainID": 137,
  "contractAddress": "0x41ec72b8b36269b85e584d7b0187067a6cc1a04d",
  "getFunction": {
    "functionName": "getValue",
    "args": [{
      "parameter" : "string",
      "value": "settings"
    }, {
      "parameter" : "address",
      "value": "0xf03214714ddc99856eb9301d2c945195974064da"
    }],
    "returnValue": "uint256"
  }
}
    

    Now if there were a value with the key settings added by the wallet 0xf03214714ddc99856eb9301d2c945195974064da, then you'd get the value belonging to that key+address combination. In this very specific case, that value would be a sha2-256 hash from a CID. Depending on the application (in the place where I use this contract, this is enough), one could compose a CID from this information. A very important note here: this is just how "I" use this for my own purposes. The fact that I can translate a sha2-256 back to a CID is only because I make sure the data I expect is in the contract. This doesn't have to be the case.

    IPNS usecase

    None of the above was specific to IPFS. Having the above would have no effect on the way IPFS works. This potential use case would change that very significantly.

    Assume this new CID with dag-bcv is in place. This opens up a whole new possibility for IPNS. If the IPNS logic were made aware of the dag-bcv logic, then IPFS could internally do the RPC call to the blockchain defined in the above-mentioned JSON blob and fetch the value from that blockchain. If that value is, in terms of protocol logic, forced to be a sha2-256 hash of a certain CID format, then IPFS knows, by virtue of the protocol, how to reconstruct a valid CID from the value gathered from the blockchain smart contract.

    It could have very significant advantages for IPNS too. To name some of them:

    1. The whole concept of TTL can be ignored. The latest record is the one you fetch from a smart contract.
    2. Updating an IPNS record becomes a hash update on the blockchain. While this is a transaction (and thus costs money), it means we can rely on the blockchain for storing the value. In other words, there would be no single point of failure in terms of relying on the original IPFS publisher to be online. That being said, we do have a new single point of failure in the blockchain RPC address. Still a win imho.
    3. IPNS resolving should be significantly faster.
       3.1. Note: the first time an entry is looked up there are two requests, one to get the JSON blob from above and one to get the data from the blockchain, so the first lookup could be slow.
       3.2. Then again, IPNS resolving can currently take minutes, so this new approach is quite likely to be faster in the majority of cases.
    4. Combines the best of IPFS with the best of blockchain! A blockchain has a hard responsibility to be online, be immutable, and, essentially, be a key -> value store. This is exactly what maps perfectly to IPNS use. And on top of that, the above spec is blockchain agnostic as long as the chain is EVM compatible.
    5. Fully transparent to ipfs:// and ipns:// once implemented. Which means this could be used in dnslink too with 0 changes to that spec!

    Closing notes

    1. Why make this spec proposal? I've had this idea for some time and slowly started thinking about how to implement it. So I thought it might be valuable, or at least wise, to take some hours and document it somewhere. Here seems to be a very suitable place for that.
    2. The above has a lot of assumptions! It definitely needs more thought to iron out how exactly it would work.
    3. I quite like the IPNS idea, but I hate its current limitations. It's unusable in my opinion. The above solves that, or so I think :)
    4. I'm not set at all on my dag-bcv idea with a derived CID approach... It might be a garbage idea. Regardless, a new CID would be rather essential, so alternatively a native IPLD approach? (What does that look like? IPLD is still a black box to me, even after reading lots about it.)
    5. This proposal (or more like a braindump idea) does add a potential point of costs (money-wise) for IPFS with regard to IPNS. Each update to an IPNS record would be on the blockchain, and that will cost you. Now, there are dirt-cheap chains like Polygon where adding a value like a sha2-256 hash plus its access key costs around 1 cent, so we're not talking about large sums. Also, the format as I set it up allows for using any EVM-compatible chain. Even the Filecoin chain could be used once it supports EVM compatibility with its upcoming FVM!

    I'm very eager to hear your opinion on this!

  • [braindump] Transparent file encryption/decryption

    Hi,

    I wanted to add this to #448 but it seemed like only one piece of the puzzle for that braindump. Therefore a new one for just transparent file encryption/decryption.

    The goal is to have at least transparent file decryption (encryption is desired too). Programs using IPFS should not need to worry about encryption keys in arguments for commands; that would potentially allow attack vectors, as the key would be part of the command. Hence my initial idea (posted as a comment in #448) is a bit bad in this regard, but I'm adding it below for completeness' sake.

    Idea 1 (not the best one)

    Implement it in IPFS commands, so that you get something like, for example, ipfs add <file> --encryption-key <aes key>. Likewise for getting a file. A benefit here is that you can also use this construct via the web API, making the encryption fully transparent for API users (they only need the AES key; the IPFS client does the encryption/decryption).

    Idea 2

    New ipfs command: ipfs add-denc-jwt <cid> <json file>

    denc stands for decryption-encryption.

    Regarding the jwt in there. I'd just go for a standard format for describing files with encryption and decryption keys. JWT seems like a nice standard so i'd probably use that, but it can be something else too.

    The command allows one to specify a <cid> and a JWT file used on that CID. The files added via this command are only locally known to the local IPFS node. The JWT file is on the user's pc and never leaves it.

    How this is added is up for debate. It could just be <cid>-jwt.json files or it could be an internal SQLite database maintaining a mapping between CID files and JWT files. It would be my preference to just have files and stat them for existence.

    Transparent decryption

    Once a JWT file is added, the internal IPFS commands should take that JWT into account when a given CID is fetched. So say for example this CID is encrypted: QmbGtJg23skhvFmu9mJiePVByhfzu5rwo74MEkVDYAmF5T [1] and a JWT is provided for that file too. Then a command like ipfs cat QmbGtJg23skhvFmu9mJiePVByhfzu5rwo74MEkVDYAmF5T should check if QmbGtJg23skhvFmu9mJiePVByhfzu5rwo74MEkVDYAmF5T-jwt.json exists and apply whatever decryption it describes. The result of ipfs cat QmbGtJg23skhvFmu9mJiePVByhfzu5rwo74MEkVDYAmF5T in this case should be the decrypted content.

    If there is no QmbGtJg23skhvFmu9mJiePVByhfzu5rwo74MEkVDYAmF5T-jwt.json then the content is still shown as is. Thus just the encrypted useless content.

    Again, note that the JWT file is never added to the IPFS network and isn't used in any communication that leaves the PC, not bitswap either. The file should only be used by commands that are going to present the data to the user, like ipfs cat and a couple of others.
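    The sidecar lookup described above can be sketched as follows. An in-memory map stands in for the <cid>-jwt.json files, AES-GCM stands in for whatever the JWT would describe, and every name is an assumption rather than anything go-ipfs actually does:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sidecars maps a CID to the symmetric key its sidecar file would hold.
var sidecars = map[string][]byte{}

// seal encrypts a blob the way a hypothetical "ipfs add --denc-jwt" might
// before adding it, prepending the nonce to the ciphertext.
func seal(key, plaintext []byte) []byte {
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}
	nonce := make([]byte, gcm.NonceSize())
	rand.Read(nonce)
	return gcm.Seal(nonce, nonce, plaintext, nil)
}

// catWithSidecar is what "ipfs cat" could do: consult the sidecar first;
// if no key material exists, return the raw (still encrypted) bytes as is.
func catWithSidecar(cid string, stored []byte) ([]byte, error) {
	key, ok := sidecars[cid]
	if !ok {
		return stored, nil // no sidecar: show content as stored
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return gcm.Open(nil, stored[:gcm.NonceSize()], stored[gcm.NonceSize():], nil)
}

func main() {
	key := make([]byte, 32)
	rand.Read(key)
	stored := seal(key, []byte("hello"))

	sidecars["QmX"] = key // the effect of "ipfs add-denc-jwt QmX key.json"
	plain, err := catWithSidecar("QmX", stored)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plain)) // hello
}
```

    Because GCM authenticates the ciphertext, a wrong or stale key fails loudly instead of returning garbage, which fits the "check the sidecar, else show as is" flow.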

    Transparent encryption

    For encryption there are a couple possible ways i can imagine for it to work out.

    1. Add an extra argument to ipfs add that takes in a jwt.json file. Something like: ipfs add <my super secret file> --denc-jwt <jwt file>. This would not add the jwt file to the local IPFS data, the user still has to do that manually with ipfs add-denc-jwt <cid> <json file>
    2. Alternatively, there could be a separate, independent application that does the encryption step. For example, call it ipfs-encrypt, which would then be invoked like: ipfs-encrypt <my file to encrypt> --denc-jwt <jwt file> | ipfs add. This hypothetical command would first encrypt a file and then add the encrypted file to ipfs using the vanilla ipfs add command.

    I'm happy to receive any feedback you folks have!

    P.S. I'd be very much open to do a call with interested parties to make up a design for this. CC @Stebalien @aschmahmann

    [1] It's big buck bunny and not encrypted, but just for the example of it.

  • [braindump] How to improve the mounting of go-ipfs as a regular filesystem

    Hi,

    Just the other day I had a very interesting discussion with @Jorropo and @RubenKelevra on Discord about improving the mounting of IPFS folders natively on your system.

    This is a large braindump-like post. Any feedback you can provide is very much appreciated! In the discussion, we all had our own very different reasons for wanting this, so I hope I'm writing this down in the most appropriate and complete way. If not, please feel free to tell me what I should add.

    The current limitations

    1. The mounting is very limited in terms of metadata properties (UnixFS v1.5 is supposedly fixing this?)
    2. If you have, in your own network, a server that runs IPFS, you cannot easily mount IPFS folders from there on another PC
    3. If you want a "cloud drive" on IPFS, say akin to OneDrive/Dropbox/..., then you currently need to have all your data public and unencrypted to even remotely make it possible.
    4. (yes this was a reason) You cannot boot from IPFS
    5. File permissions

    Wishes

    These are just the ideas we had. There are likely a whole lot more.

    1. Mount IPFS folder over network
    2. Make it possible to boot from IPFS
    3. Have file encryption support, making a dropbox like service with private data possible
    4. Saturate network card speed for local network traffic. If you have a 1 Gbit network and your IPFS node has the data you want locally (on its host storage), then the transfer speed should be 1 Gbit or close to the theoretical limit. Protocols like SMB and NFS struggle with this too.
    5. Allow browsing your files on the web too, again like Dropbox. This means securing your files (i.e. encryption). The best example of this in the IPFS world is probably Peergos.

    There are many more limitations and wishes when you keep brainstorming about this. Those are about the most prominent ones we discussed. So, let's solve them! :) I'm going to ignore UnixFS v1.5 for the rest of this post, as I'm assuming it to be in place for any of the future solutions.

    Potential solutions

    Disclaimer: there is no easy solution. Anything you can imagine is at the very best a huge task and at the very worst a monstrous task.

    NFS server

    A potential solution would be to write an NFS server application that can expose IPFS data via NFS. Internally, the server would use the IPFS API to get all the data it needs.

    Pros

    • Make use of NFS clients (any) to mount IPFS shares hosted by an NFS server.
    • Allows you to boot from IPFS
    • Wide support (windows, mac, linux and even android)
    • Lots of server applications out there to take inspiration from

    Cons

    • A very old protocol with a lot of legacy
    • Tremendously huge to implement
    • Difficult to get performant (in terms of network throughput)
    • Lacks encryption though NFS over TLS is possible.

    Personally, I like the concept of NFS but not the complexity. It has grown over the decades and isn't of this time anymore. Perhaps it's time for something new?

    Invent a new network protocol

    In other projects this would sound insane. In IPFS, where the whole protocol is reinvented, it doesn't sound so strange to me. This definitely is a huge task on its own, but it does allow making something extensible and suited to today's desires. The following is again a braindump list of concepts it should support; it's far from complete. I'm not daring to write a list of features for this, as it would only be sorely incomplete.

    How to proceed from here on forth?

    What I would really like to do is have a braindump session somewhere with the people involved in this. Just brainstorm a little about the features we want to have and how we can get there. So if you are interested, let it be known in the comments, along with whether you're willing to attend! For example, this could tie in very nicely with this 2021 theme proposal from @obo20.

    From there on, once we have a global idea of what we want, it's much easier to see if someone can make a proof of concept of something. Right now it's so broad and so vague that it's difficult to start anywhere at all.

    So I'm just tagging the people here that could be interested. Please add more in the comments if you think someone else should be aware of it. @lidel @Stebalien @aschmahmann @autonome @momack2

    Let's get this ball rolling, get some clarity on where we want to go and perhaps make some proof of concepts :)

    Cheers, Mark

  • Flatpak store stored on IPFS

    What do you guys think about the idea of asking the Flatpak project to integrate IPFS?

    I was wondering if there's interest in exploring the possibility of storing the Flatpak store on IPFS, ideally without any compression, to let buzhash figure out the diff of an update.

    If IPFS could be mounted, like via NFS (see https://github.com/ipfs/roadmap/issues/83), IPFS could become the storage for the application and fetch new versions on-the-fly when opening the app the next time.

    An example of how it could work:

    The app would be put unpacked to IPFS and published under an IPNS.

    On an update, the latest IPNS name would be resolved and the IPFS folder mounted to ./.local/share/flatpak/app/$app-id/

    IPFS would be asked to store the app in the MFS under /flatpak-apps/$app-id/ for example and fetch it recursively.

    You could start the app immediately after an update, while IPFS still fetches the differences, the same goes for installations. But a warning about degraded performance while the fetching is still running would be good.

    Since we could use buzhash as the chunker, and the files are stored uncompressed as a directory structure, this would automatically give differential updates.

    I already built a similar project with https://github.com/RubenKelevra/pacman.store - the discussion which led to this is archived here: https://github.com/ipfs/notes/issues/84

Related tags
Doge-SelfDelete - Golang implementation of the research by @jonaslyk and the drafted PoC from @LloydLabs

Doge-SelfDelete - file self-deletion implemented in Golang, from the research by @jonaslyk and the drafted PoC from @LloydLabs

Oct 2, 2022
contaiNERD CTL - Docker-compatible CLI for containerd, with support for Compose, Rootless, eStargz, OCIcrypt, IPFS, ...

Jan 4, 2023
Go-ipfs-cmds - Cmds offers tools for describing and calling commands both locally and remotely

Jan 18, 2022
Deece is an open, collaborative, and decentralised search mechanism for IPFS

Deece Deece is an open, collaborative, and decentralised search mechanism for IPFS. Any node running the client is able to crawl content on IPFS and a

Oct 29, 2022
IPFS Cluster - Automated data availability and redundancy on IPFS

Jan 2, 2023
Ipfs-retriever - An application that retrieves files from the IPFS network

ipfs-retriever This is an application that retrieves files from the IPFS network. It

Jan 5, 2022
A simple command line notebook for programmers

Dnote is a simple command line notebook for programmers. It keeps you focused by providing a way of effortlessly capturing and retrieving information

Jan 2, 2023
Cloud-native way to provide elastic Jupyter Notebook services on Kubernetes

elastic-jupyter-operator: Elastic Jupyter on Kubernetes. A Kubernetes-native elastic Jupyter notebook service that provides on-demand Jupyter Notebook services for users. elastic-jupyter-operator provides the following features

Dec 29, 2022
Go-Notebook is inspired by Jupyter Project (link) in order to document Golang code.

Go-Notebook Go-Notebook is an app that was developed using go-echo-live-view framework, developed also by us. GitHub repository is here. For this proj

Jan 9, 2023
Go (golang) Jupyter Notebook kernel and an interactive REPL

lgo Go (golang) Jupyter Notebook kernel and an interactive REPL Disclaimer Since go1.10, this Go kernel has performance issue due to a performance reg

Jan 1, 2023
A web server that sits beside jupyterhub and scrapes answers out of notebook files.

A Prototype grader tool that runs with jupyterhub that essentially parses jupyter notebooks and responds with a set of form fields automatically fille

Feb 22, 2022
Age-encrypted-notebook - Age encrypted notes saved in a bolt DB

Age Encrypted Notebook (aen) Disclaimer: This project has the sole purpose of ge

Sep 15, 2022
Collaborative Filtering (CF) Algorithms in Go!

Go Recommend Recommendation algorithms (Collaborative Filtering) in Go! Background Collaborative Filtering (CF) is oftentimes used for item recommenda

Dec 28, 2022
A recommender system service based on collaborative filtering written in Go

Language: English | Chinese. gorse: Go Recommender System Engine. gorse is an offline recommender system backend based o

Dec 29, 2022
Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

The Moby Project Moby is an open-source project created by Docker to enable and accelerate software containerization. It provides a "Lego set" of tool

Jan 8, 2023
GRONG is a DNS (Domain Name System) authoritative name server. It is more a research project than a production-ready program.

GRONG (Gross and ROugh Nameserver written in Go) is a DNS (Domain Name System) authoritative name server. It is intended as a research project and is

Oct 17, 2020
Selfhosted collaborative browser - room management for n.eko

neko-rooms Simple room management system for n.eko. Self hosted rabb.it alternative. How to start You need to have installed Docker and docker-compose

Dec 20, 2022
Hetty is an HTTP toolkit for security research.

Hetty is an HTTP toolkit for security research. It aims to become an open source alternative to commercial software like Burp Suite Pro, with powerful

Dec 27, 2022
"Go SQL DB" is a relational database that supports SQL queries for research purposes

A pure golang SQL database for database theory research

Jan 6, 2023