Why Osana takes so long? (A programmer's point of view on the current situation)

I decided to write a comment somewhere about «Why Osana takes so long?» and what can be done to shorten this time. It turned into a long essay. Here's the TL;DR:
The cost of never paying down this technical debt is clear; eventually the cost to deliver functionality will become so slow that it is easy for a well-designed competitive software product to overtake the badly-designed software in terms of features. In my experience, badly designed software can also lead to a more stressed engineering workforce, in turn leading to higher staff churn (which in turn affects costs and productivity when delivering features). Additionally, due to the complexity in a given codebase, the ability to accurately estimate work will also disappear.
Junade Ali, Mastering PHP Design Patterns (2016)
Longer version: I am not sure if people here wanted an explanation from a real developer who works with C and with relatively large projects, but I am going to give one nonetheless. I am not much interested in Yandere Simulator, nor in this genre in general, but this particular development has a lot to teach any fellow programmers and software engineers, to make sure they never end up in Alex's situation, especially considering that he is definitely not the first to get himself knee-deep in development hell (do you remember Star Citizen?) and he is definitely not the last.
On the one hand, people see that Alex works incredibly slowly, the equivalent of, like, one hour per day, comparing it with, say, Papers, Please, a game that was developed in nine months from start to finish by one guy. On the other hand, Alex himself most likely thinks that he works until complete exhaustion each day. In fact, I highly suspect that both those statements are correct! Because of the mistakes made during the early development stages, which are highly unlikely to be fixed due to the pressure put on the developer right now and due to his overall approach to coding, the cost to add any relatively large feature (e.g. Osana) can be pretty much comparable to the cost of creating a fan game from start to finish. Trust me, I've seen his leaked source code (don't tell anybody about that) and I know what I am talking about. The largest problem in Yandere Simulator right now is its super slow development. So, without further ado, let's talk about how «implementing the low hanging fruit» crippled the development and, more importantly, what, from my point of view, would be the ideal course of action to get out of this. I'll try to explain things in the easiest terms possible.
  1. else if's and the lack of any refactoring in general
The most «memey» one. I won't talk about the performance though (a switch statement is not better in terms of performance; that is a myth. If the compiler detects some code that can be turned into a jump table, for example, it will do it, no matter whether it is a chain of if's or a switch statement. Compilers nowadays are way smarter than one might think). Just take a look here. I know that it's his older JavaScript code, but, believe it or not, this piece is still present in the C# version relatively untouched.
I refactored this code for you using the C language (mixed with C++, since there's no this pointer in pure C). Note that the else if's are still there; else if's are not a problem by themselves.
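His snippet isn't reproduced in this copy of the post, but the flag-based approach described here looks roughly like the sketch below (a hypothetical reconstruction; none of these names come from the actual codebase):

```cpp
#include <cstdio>

// Hypothetical witness-reaction logic: each observed condition sets a bit,
// and one pass over the flags replaces a hand-written branch per combination.
enum WitnessFlags : unsigned {
    kTrespassing = 1u << 0,
    kBlood       = 1u << 1,
    kWeapon      = 1u << 2,
};

struct Student {
    unsigned witnessed = 0;  // bitwise OR of WitnessFlags

    void LookAtPlayer(bool trespassing, bool blood, bool weapon) {
        if (trespassing) witnessed |= kTrespassing;
        if (blood)       witnessed |= kBlood;
        if (weapon)      witnessed |= kWeapon;
        React();
    }

    void React() const {
        // One reaction per flag; combinations such as Trespassing + Blood
        // fall out automatically instead of needing their own else if branch.
        if (witnessed & kTrespassing) std::puts("You shouldn't be here!");
        if (witnessed & kBlood)       std::puts("Is that... blood?!");
        if (witnessed & kWeapon)      std::puts("Why do you have a weapon?!");
    }
};

int main() {
    Student s;
    s.LookAtPlayer(/*trespassing=*/true, /*blood=*/true, /*weapon=*/false);
}
```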
The refactored code is just objectively better for one simple reason: it is shorter, while not being obscure, and now it should be able to handle, say, the Trespassing and Blood case without any input from the developer, thanks to the use of flags. Basically, the shorter your code, the more you can see on screen without spreading your attention too much. As a rule of thumb, the fewer lines there are, the easier it is to work with the code. Just don't overdo it, unless you are going to participate in the International Obfuscated C Code Contest. Let me reiterate:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Antoine de Saint-Exupéry
This is why refactoring — the activity of rewriting your old code so it does the same thing, but does it quicker, in a more generic way, in fewer lines or more simply — is so powerful. In my experience, you can only keep one module/class/whatever in your brain if it does not exceed ~1000 lines, maybe ~1500. Splitting a 17,000-line class into smaller classes probably won't improve performance at all, but it will make working with parts of that class way easier.
Is it too late now to start refactoring? Of course NO: better late than never.
  2. Comments
If you think that since you wrote this code, you'll always easily remember it, I have some bad news for you: you won't. In my experience, one week and that's it. That's why comments are so crucial. It is not necessary to put a ton of comments everywhere; just a general idea will help you out in the future, even if you think that It Just Works™ and you'll never ever need to fix it. The time spent writing and debugging one line of code almost always exceeds the time needed to write one comment in large-scale projects. Moreover, the best code is the code that is self-evident. In the example above, what the hell does (float) 6 mean? Why not wrap it in a constant with a good, self-descriptive name? Again, it won't affect performance, since the C# compiler is smart enough to silently remove this constant from the generated code and place its value into the method invocation directly. Such constants are there for you.
I rewrote my code above a little bit to illustrate this. With those comments, you don't have to remember your code at all, since its functionality is outlined in two tiny lines of comments above it. Moreover, even a person with zero knowledge of programming will figure out the purpose of this code. It took me less than half a minute to write those comments, but it'll probably save me quite a lot of time figuring out «what was I thinking back then» one day.
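That rewritten snippet is also missing from this copy of the post, so here is a hedged reconstruction of the idea (assumed names and values, not the real code): a self-descriptive constant plus two short comments that spare you from re-reading the logic itself.

```cpp
// A student notices the player when they are closer than this (in meters).
constexpr float kNoticeDistance = 6.0f;

// React only when the player is visible and within notice range;
// otherwise the student keeps following their daily routine.
bool ShouldReact(bool playerVisible, float distanceToPlayer) {
    return playerVisible && distanceToPlayer < kNoticeDistance;
}
```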
Is it too late now to start adding comments? Again, of course NO. Don't be lazy and redirect all your typing from the «debunk» page (which pretty much does the opposite of debunking, but who am I to judge you here?) into some useful comments.
  3. Unit testing
This is often neglected, but consider the following. You wrote some code, you ran your game, you saw a new bug. Was it introduced just now? Is it a problem in your older code that has only shown up because you had never actually exercised that code until now? Where should you search for it? You have no idea, and you have one painful debugging session ahead. Just imagine how much easier it would be if you had some routines that automatically execute after each build and check that the environment is still sane and nothing is broken on a fundamental level. This is called unit testing, and yes, unit tests won't be able to catch all your bugs, but even catching 20% of bugs at an earlier stage is a huge boon to development speed.
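As a minimal illustration (reusing the hypothetical ShouldReact helper from the sketch above), a unit test can be as small as a few asserts that run after every build and fail loudly when a refactor breaks the basic contract:

```cpp
#include <cassert>

// Function under test (hypothetical, from the earlier sketch).
constexpr float kNoticeDistance = 6.0f;
bool ShouldReact(bool playerVisible, float distanceToPlayer) {
    return playerVisible && distanceToPlayer < kNoticeDistance;
}

int main() {
    assert(ShouldReact(true, 3.0f));    // visible and close -> reacts
    assert(!ShouldReact(true, 10.0f));  // visible but far   -> ignores
    assert(!ShouldReact(false, 3.0f));  // close but unseen  -> ignores
    return 0;
}
```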
Is it too late now to start adding unit tests? Kinda YES and NO at the same time. Unit testing works best if it covers the majority of a project's code. On the other hand, a journey of a thousand miles begins with a single step. If you decide to start refactoring your code, writing a unit test before refactoring will help you prove to yourself that you have not broken anything, without needing to run the game at all.
  4. Static code analysis
This is basically self-explanatory. You set it up once, and you forget about it. A static code analyzer is another piece of «free real estate» to speed up the development process by finding tiny errors, mostly silly typos (do you think you are good enough at finding them? Well, good luck catching x << 4; in place of x <<= 4; buried deep in C code by eye!). Again, this is not a silver bullet; it is another tool that will help you out with debugging a little, along with the debugger, unit tests and other things. You need every little bit of help here.
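For illustration, here is that exact typo in context. Both lines compile cleanly, but the first one computes a shift and throws the result away, which any decent static analyzer (and most compilers with warnings enabled) will flag as an expression whose result is unused:

```cpp
void Scale(unsigned &x) {
    x << 4;   // no effect: result discarded (analyzer warning)
    x <<= 4;  // actually multiplies x by 16
}
```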
Is it too late now to hook up a static code analyzer? Obviously NO.
  5. Code architecture
Say you want to build Osana, but then you decide to implement some other feature, e.g. Snap Mode. By doing this you have maybe made your game a little bit better, but what you have essentially done is complicate your life, because now you also have to write Osana code for Snap Mode. The way the game architecture is done right now, easter-egg code is deeply interleaved with game logic, which leads to code «spaghettifying», which in turn slows down the addition of new features, because one has to consider how each new feature would work alongside every old feature and easter egg. Even if it is just glancing over one line per easter egg, it adds to the mess, slowly but surely.
A lot of people mention that the developer should have been doing it in an object-oriented way. However, there is no silver bullet in programming. It does not matter that much whether you do it the object-oriented way or the usual procedural way; you can theoretically write, say, AI routines in a functional language (e.g. Lisp) or even in a logic language if you are brave enough (e.g. Prolog). You can even invent your own tiny programming language! The only thing that matters is code quality and avoiding the so-called shotgun surgery situation, which plagues Yandere Simulator from top to bottom right now. Is there a way of adding a new feature without interfering with your older code (e.g. by creating a child class which will encapsulate all the things you need)? Go for it; this feature is basically «free» for you. Otherwise, you'd better think twice before doing it, because you are going into «technical debt» territory, borrowing time from the future by saying «I'll maybe optimize it later» and «a thousand more lines probably won't slow me down in the future that much, right?». Technical debt will incur interest of its own that you'll have to pay. Basically, the entire situation around Osana right now is a huge tale about how the «interest» incurred by technical debt can control the entire project, like the tail wagging the dog.
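A minimal sketch of that child-class idea (hypothetical names, nothing from the real codebase): the new feature lives in one subclass, and the existing code stays untouched, so there is no shotgun surgery across the project.

```cpp
struct Student {
    virtual ~Student() = default;
    virtual void Update() { /* normal daily routine */ }
};

// All Osana-specific behaviour is encapsulated here; adding it does not
// require edits scattered through the rest of the Student code.
struct Osana : Student {
    void Update() override {
        Student::Update();   // keep the shared routine
        RunRivalEvents();    // then layer the new feature on top
    }
    void RunRivalEvents() { /* scripted rival logic */ }
};
```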
I won't elaborate further here, since it would take an even larger post to fully describe what's wrong with Yandere Simulator's code architecture.
Is it too late to rebuild the code architecture? Sadly, YES, although it should be possible to split the Student class into descendants by using hooks for individual students. However, the code architecture can still be improved by a vast margin if you start removing easter eggs and features like Snap Mode that currently bloat Yandere Simulator. I know it is going to be painful, but it is the only way to improve code quality here and now. This will simplify the code, and it will make it easier for you to add the «real» features, like Osana or whatever else you'd like to accomplish. If you ever want the removed features back, you can track them down in Git history and re-implement them one by one, hopefully without performing shotgun surgery this time.
  6. Loading times
Again, I won't be talking about the performance, since you can debug your game at 20 FPS as well as at 60 FPS; that is a very different story. Yandere Simulator is huge. Once you've fixed a bug, you want to test it, right? And your workflow right now probably looks like this:
  1. Fix the code (unavoidable time loss)
  2. Rebuild the project (can take a loooong time)
  3. Load your game (can take a loooong time)
  4. Test it (unavoidable time loss, unless another bug has popped up via unit testing, code analyzer etc.)
And you can shorten this loop. For instance, I know that Yandere Simulator takes all the students' photos during loading. Why should that be done there? Why not either move it to the project build stage by adding a build hook, so Unity does it for you during a full project rebuild, or, even better, disable it completely or replace the photos with «PLACEHOLDER» text in debug builds? Each second spent watching the loading screen will be rightfully interpreted as «son is not coding» by the community.
Is it too late to reduce loading times? Hell NO.
  7. Jenkins
Or any other continuous integration tool. «Rebuild the project» can take a long time too, and what can we do about that? Let me give you an idea. Buy a new PC. Get a 32-core Threadripper, 32 GB of the fastest RAM you can afford and a cool motherboard which supports all of that (of course, a Ryzen/i5/Celeron/i386/Raspberry Pi is fine too, but the faster, the better). The rest is not necessary; e.g. a barely functional second-hand video card burned out by bitcoin mining is fine. You set up this second PC in your room. You connect it to your network. You set up a ramdisk to speed things up even more. You properly set up Jenkins on this PC. From now on, Jenkins takes care of the rest: tracking your Git repository, the (re)building process, large and time-consuming unit tests, invoking the static code analyzer, profiling, generating reports and whatever else you can and want to hook up. More importantly, you can fix another bug while Jenkins is rebuilding the project for the previous one, et cetera.
In general, continuous integration is a great technology for quickly tracking down errors that were introduced in previous versions, helping avoid those kinds of bug-hunting sessions. I am highly unsure whether continuous integration is needed for projects 10,000–20,000 source lines long, but things change as soon as we step into 100k+ territory, and Yandere Simulator by now has approximately 150k+ source lines of code. Continuous integration is probably well worth it for Yandere Simulator.
Is it too late to add continuous integration? NO, although it is going to take some time and skill to set up.
  8. Stop caring about the criticism
Stop comparing Alex to Scott Cawthon. IMO Alex is very similar to the person known as SgtMarkIV, the developer of Brutal Doom, who is also a notorious edgelord who, for example, once told somebody to kill himself, just like… However, horrible person or not, SgtMarkIV does his job. He simply does not care much about public opinion. That's the difference.
  9. Go outside
Enough said. Your brain works slower if you only think about games and if you can't provide it with enough oxygen. I know that this one is probably the hardest to implement, but…
That's all, folks.
Bonus: can you imagine how short this list would have been if someone had simply listened to Mike Zaimont instead of breaking down in tears?
submitted by Dezhitse to Osana

Polkadot — An Early In-Depth Analysis — Part One — Overview and Benefits

Having recently researched Polkadot, as with other projects, I wanted to document what I had learnt so that others may potentially find it useful. Hopefully providing a balanced view, it will consist of three articles, outlined below.
Part One — Polkadot Overview and Benefits (This article)
Part Two — In-Depth look at the Consensus
Part Three — Limitations and Issues
I will provide links throughout, referencing the relevant sections, and include a list of sources at the bottom of the article for further reading.

Overview

Frustrated with the slow development of Ethereum 2.0, Dr. Gavin Wood, co-founder of Ethereum and inventor of Solidity, left to begin work on Polkadot, a next generation scalable blockchain protocol that connects multiple specialised blockchains into one unified network. It achieves scalability through a sharding infrastructure with multiple blockchains running in parallel, called parachains, that connect to a central chain called the Relay Chain.
Whilst it shares some similarities with Ethereum 2.0, one key differentiator is that it uses heterogeneous sharding, where each parachain can be customised through the Substrate development framework, enabling it to be optimised for a specific use case and run in parallel, rather than every shard being the same. This is important because, when it comes to blockchain architecture, one size does not fit all: all blockchains make trade-offs to support different features and use cases.
All parachains connect to the relay chain, which validates the state transition of connected parachains, providing shared state across the entire ecosystem. If the Relay Chain must revert for any reason, then all of the parachains would also revert. This is to ensure that the validity of the entire system can persist, and no individual part is corruptible. The shared state makes it so that the trust assumptions when using parachains are only those of the Relay Chain validator set, and no other. Since the validator set on the Relay Chain is expected to be secure with a large amount of stake put up to back it, it is desirable for parachains to benefit from this security.
This enables seamless interoperability between all parachains and parathreads using the Cross-chain Message Passing (XCMP) protocol, allowing arbitrary data — not just tokens — to be transferred across blockchains. Interoperability with other ecosystems is possible through bridges, which are specially designed parachains or parathreads custom made to interact with another ecosystem, such as Ethereum, Bitcoin or Cosmos. Because these other ecosystems don't share Polkadot's state, finality is incredibly important: whilst the relay chain can roll back all the parachains, it can't roll back the Ethereum or Bitcoin blockchains. This is discussed further in part three.
The relay chain is responsible for the network's shared security, consensus and cross-chain interoperability. It is secured by validators and nominators staking the native DOT tokens. Ultimately, scalability for the ecosystem is determined by how scalable the relay chain can be. The number of parachains is determined by the number of validators on the relay chain. The hope is to reach 1,000 validators, which would enable around 100 parachains, with each parachain capable of around 1,000 transactions per second.
Nominators stake their DOT tokens with validators they trust, with the validators likely charging a small commission to cover running costs. If a validator is found to have committed misconduct, a percentage of their stake, and of their nominators' stake, will be slashed depending upon the severity. For Level 4 security threats, such as collusion and including an invalid block, 100% of the stake will be slashed. What's really important to understand is that both the validator's own stake and the nominated stake get slashed, so you could lose all the DOT you have staked with a validator if they act maliciously. Therefore, it's very important not to simply try to maximise rewards while staying oblivious to the risk: not only can you lose all your DOT, but you also make the entire system less secure (addressed in part three). There have already been several minor slashing incidents, so this is something to really consider.

Auction for Parachain Slots

Due to the limited number of parachain slots available, there needs to be a method to decide who gets a parachain slot. This is achieved through a candle auction, where participants bid with DOT to secure a lease on a parachain slot for a 6–24 month period, with the highest bidders winning. The DOT isn't spent, but rather locked for the duration of the lease, unable to participate in staking and earn rewards. If the team is unsuccessful in securing a further slot, the lease expires and the DOT is returned.
Of the 100 parachain slots that they hope to accommodate, between 10 and 30 will be reserved for system parachains, with the remainder available as auction slots or used for parathreads. Whilst the DOT is returned, the limited number of slots means significant amounts of DOT may need to be acquired to secure one. How the auction mechanics affect the price of DOT also remains to be seen: potentially a rise from the start of the auction, followed by a fall before the lease ends and the DOT is returned. The plan is to continuously run a small number of parachain auctions throughout the year to minimise any unwanted effects. How comfortable developers will be with locking significant amounts of funds in a highly volatile asset for an extended amount of time also remains to be seen. They could also end up unable to afford to keep their lease, and have to downgrade to a parathread (provided the application still functions with the reduced performance) or migrate to another platform. See this article for more details on the auction mechanism.

Parathreads

For applications that don't require the guaranteed performance of a parachain, or don't want to pay the large fees to secure a parachain slot, parathreads can be used instead. Parathreads have a fixed registration fee that would realistically be much lower than the cost of acquiring a parachain slot, and they compete with other parathreads in a per-block auction to have their transactions included in the next relay chain block. A portion of the parachain slots on the Relay Chain will be designated as part of the parathread pool.
In the event that a parachain loses its slot, it can transition to a parathread (assuming the application can still function with the reduced and variable performance of sharing slots with many others). This also enables small projects to start out as a parathread and then upgrade to a parachain slot when required.

Token

DOT is the native token of the Polkadot network and serves three key functions: (i) it is staked to provide security for the relay chain, (ii) it is bonded to connect a chain to Polkadot as a parachain, and (iii) it is used for governance of the network. There is an initial total supply of 1 billion DOT, with yearly inflation estimated to be around 10% provided the optimal 50% staking rate is achieved, resulting in rewards of 20% for those that stake (a net 10% when taking inflation into account). Those that don't stake lose 10% through dilution. Should the amount staked exceed the optimal 50%, reward rates and inflation reduce to make staking less attractive; likewise, if it's below 50%, rewards and inflation will be higher to encourage staking. Staking isn't risk-free though, as mentioned before.
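As a worked check of those figures (a sketch using the article's round numbers, not the exact Web3 Foundation reward curve):

```cpp
#include <cstdio>

int main() {
    const double totalSupply = 1'000'000'000;  // initial 1B DOT
    const double inflation   = 0.10;           // ~10% yearly at the optimum
    const double stakedRatio = 0.50;           // optimal 50% staking rate

    double minted      = totalSupply * inflation;   // 100M new DOT per year
    double staked      = totalSupply * stakedRatio; // 500M DOT staked
    double grossReturn = minted / staked;           // 0.20 -> 20% to stakers
    double netReturn   = grossReturn - inflation;   // ~10% after dilution

    std::printf("gross %.0f%%, net %.0f%%\n",
                grossReturn * 100, netReturn * 100);
}
```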

Governance

Polkadot employs an on-chain governance model where, in order to make any changes to the network, DOT holders vote on a proposal to upgrade the network with the help of the Council. The Council is an entity comprising 23 seats, each represented by an on-chain account. Its goals are to represent passive stakeholders, submit sensible and important proposals, and cancel dangerous or malicious proposals. All DOT holders are free to register their candidacy for the Council, and free to vote for any number of candidates, with voting power proportional to their stake.
Any stakeholder can submit a public proposal by depositing a fixed minimum amount of DOTs, which stays locked for a certain period. If someone agrees with the proposal, they may deposit the same amount of tokens to endorse it. Public proposals are stored in a priority queue, and at regular intervals the proposal with the most endorsements gets tabled for a referendum. The locked tokens are released once the proposal is tabled. Council proposals are submitted by the Council, and are stored in a separate priority queue where the priorities are set at the Council’s discretion.
Every thirty days, a new proposal will be tabled, and a referendum will come up for a vote. The proposal to be tabled is the top proposal from either the public-proposal queue or the Council-proposal queue, alternating between the two queues.
The Technical Committee is composed according to a single vote for each team that has successfully and independently implemented or formally specified the protocol in Polkadot, or in its canary network Kusama. The Technical Committee is the last line of defence for the system. Its sole purpose is detecting present or imminent issues in the system such as bugs in the code or security vulnerabilities, and proposing and fast-tracking emergency referenda.

Ecosystem

Whilst parachains aren't implemented at this stage, there is a rapidly growing ecosystem looking to build on Polkadot with Substrate. Polkadot's "cousin", the canary network Kusama, used for experimentation, was launched last year by the same team and contributes to the early growth of the overall ecosystem. See here for a list of the current projects looking to build on Polkadot, and filter by Substrate based.
Now that we have covered the basics, in part two I will explain how the consensus mechanism in Polkadot works, covering more of the technical aspects.
submitted by xSeq22x to CryptoCurrency

Hello Cardano - Introducing Aurum Stake Pools🚰 Faucets🚰

Hello Cardano community,
First of all, we’d like to introduce ourselves. Aurum is a group of industry-leading software engineering professionals. We’ve begun this new project with cryptocurrency to explore different options for a viable long-term company. We wanted to start our journey with Cardano, since this is a community we are excited about and want to be part of.
Now that all the introductions are done, let’s cut to the chase! What do we offer in our stake pools?
At Aurum, we have been brainstorming fun and innovative ways to introduce our pools to our delegators and during one of our conversations the idea of a Faucet came out. For those that are not familiar, this wikipedia page talks about Bitcoin faucet and what they were used for. https://en.wikipedia.org/wiki/Bitcoin_faucet
For our first pool (Ticker: AUSP, PoolId: 5007483bab60674a9000dced43d83717cad17b885b76377a4bfada1e), we have decided to give 1% of the 5% stake pool operator reward to one of our delegators each epoch, leaving us 4% to manage the pool.
So how does it work?
If you stake with our pool, you will still earn 95% of the rewards (like all the other pools), split among all the delegators; however, once per epoch you will also get the chance to earn the full 1% of the reward. So let’s plug in some numbers, shall we?
Since we think about the long term, let’s start by saying that all our examples are based on a 30,000,000 ADA stake pool size (the maximum pool saturation once Cardano reaches the 1,000 stake pool limit).
Let’s assume that you have invested 1,000,000 ADA in the above stake pool, which already holds 29,000,000, bringing it to a total stake of 30,000,000.
You will be earning 49,694.50 ADA yearly, or 680.746331274 ADA per epoch, which is a 4.9694% reward per year. However, assuming there are 10 other delegators in the same pool, you will get a 1/10 chance every epoch to earn an extra 217.107146151 ADA. Since there are 73 epochs in a year, you could expect to win it 7 times, increasing your yearly income by an extra 1,519.75002306 ADA, which would be a 5.12% reward per year.
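As a quick check of that arithmetic (taking the post's own yearly reward, faucet bonus and 73-epoch year as given, rather than recomputing them from Cardano's actual reward formula):

```cpp
#include <cstdio>

int main() {
    const double stake         = 1'000'000.0;    // ADA delegated
    const double yearlyBase    = 49'694.50;      // ADA per year, as stated
    const double epochsPerYear = 73;
    const double faucetBonus   = 217.107146151;  // ADA per winning epoch
    const double winsPerYear   = 7;              // ~73 epochs at a 1/10 chance

    double perEpoch   = yearlyBase / epochsPerYear;   // ~680.75 ADA
    double extraYear  = faucetBonus * winsPerYear;    // ~1,519.75 ADA
    double totalYield = (yearlyBase + extraYear) / stake;

    std::printf("per epoch %.2f ADA, bonus %.2f ADA, yield %.2f%%\n",
                perEpoch, extraYear, totalYield * 100);  // ~5.12%
}
```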
At Aurum we understand that not everyone will stake the same amount, and this would be unfair to the bigger delegators; however, for our first pool we tried to keep it simple. We are working on tools to automate the process, as well as a website where we could pair and match different stakeholders with the same staking amount to the same pool.
Further, since we are a customer-obsessed group, we are interested in knowing your opinion and any ideas you might have. At Aurum, there are different ideas we have been discussing around this Faucet concept. For example, we have discussed creating a pool where the stake operator reward goes only to small delegators (1000 ADA or less), or a stake pool where 50% goes to the Faucet and 45% goes to the delegators. Additionally, we have been discussing other ideas outside this Faucet concept, such as stake pools as a service or privately managed stake pools. Let us know what you are interested in and we will definitely listen.
At Aurum, we are excited for the future and all of these projects.
Stay in touch, we are always open to questions and you can reach us on any of our social networks or our telegram channel mentioned below.
Let us know if we missed anything. We are excited to hear from you!
The Aurum team.
Twitter: https://twitter.com/austakepool
Telegram: https://t.me/austakepool
Website: https://www.austakepool.com
submitted by austakepool to CardanoStakePools

Polkadot Launch AMA Recap

The Polkadot Telegram AMA below took place on June 10, 2020

AMA featured:
Dieter Fishbein, Ecosystem Development Lead, Web3 Foundation
Logan Saether, Technical Education, Web3 Foundation
Will Pankiewicz, Master of Validators, Parity Technologies
Moderated by Dan Reecer, Community and Growth, Polkadot & Kusama at Web3 Foundation

Transcription compiled by Theresa Boettger, Polkadot Ambassador:

Dieter Fishbein, Ecosystem Development Lead, Web3 Foundation

Dan: Hey everyone, thanks for joining us for the Polkadot Launch AMA. We have Dieter Fishbein (Head of Ecosystem Development, our business development team), Logan Saether (Technical Education), and Will Pankiewicz (Master of Validators) joining us today.
We had some great questions submitted in advance, and we’ll start by answering those and learning a bit about each of our guests. After we go through the pre-submitted questions, then we’ll open up the chat to live Q&A and the hosts will answer as many questions as they can.
We’ll start off with Dieter and ask him a set of some business-related questions.

Dieter could you introduce yourself, your background, and your role within the Polkadot ecosystem?

Dieter: I got my start in the space as a cryptography researcher at the University of Waterloo. This is where I first learned about Bitcoin and started following the space. I spent the next four years or so on the investment team for a large asset manager where I primarily focused on emerging markets. In 2017 I decided to take the plunge and join the space full-time. I worked at a small blockchain-focused VC fund and then joined the Polkadot team just over a year ago. My role at Polkadot is mainly focused on ensuring there is a vibrant community of projects building on our technology.

Q: Adoption is one of the important factors that all projects need to focus on to become more attractive to the industry. So, what is Polkadot's plan to gain more adoption?

A (Dieter): Polkadot is fundamentally a developer-focused product, so much of our adoption strategy is focused on making Polkadot an attractive product for developers. This has many elements. Right now the path for most developers to build on Polkadot is by creating a blockchain using the Substrate framework, which they will later connect to Polkadot when parachains are enabled. This means that much of our adoption strategy comes down to making Substrate an attractive tool and framework. However, it's not enough to make building on Substrate attractive; we must also provide an incentive for these developers to actually connect their Substrate-based chain to Polkadot. Part of this incentive is the security that the Polkadot relay chain provides, but another key incentive is becoming interoperable with a rich ecosystem of other projects that connect to Polkadot. This means that a key part of our adoption strategy is outreach focused. We go out there and try to convince the best projects in the space that building on our technology will provide them with significant value-add. This is not a purely technical argument. We provide significant support to projects building in our ecosystem through grants, technical support, incubator/accelerator programs and other structured support programs such as the Substrate Builders Program (https://www.substrate.io/builders-program). I do think we really stand out in the significant, continued support that we provide to builders in our ecosystem. You can also take a look at the over 100 grants that we've given from the Web3 Foundation: https://medium.com/web3foundation/web3-foundation-grants-program-reaches-100-projects-milestone-8fd2a775fd6b

Q: On moving forward through your roadmap, what are your most important next priorities? Does the Polkadot team have enough fundamentals (Funds, Community, etc.) to achieve those milestones?

A (Dieter): I would say the top priority by far is to ensure a smooth roll-out of key Polkadot features such as parachains, XCMP and other key parts of the protocol. Our recent Proof of Authority network launch was only just the beginning, it’s crucial that we carefully and successfully deploy features that allow builders to build meaningful technology. Second to that, we want to promote adoption by making more teams aware of Polkadot and how they can leverage it to build their product. Part of this comes down to the outreach that I discussed before but a major part of it is much more community-driven and many members of the team focus on this.
We are also blessed to have an awesome community to make this process easier 🙂

Q: Where can a list of Polkadot's application-specific chains be found?

A (Dieter): The best list right now is http://www.polkaproject.com/. This is a community-led effort and the team behind it has done a terrific job. We’re also working on providing our own resource for this and we’ll share that with the community when it’s ready.

Q: Could you explain the differences and similarities between Kusama and Polkadot?

A (Dieter): Kusama is fundamentally a less robust, faster-moving version of Polkadot with less economic backing by validators. It is less robust since we will be deploying new technology to Kusama before Polkadot so it may break more frequently. It has less economic backing than Polkadot, so a network takeover is easier on Kusama than on Polkadot, lending itself more to use cases without the need for bank-like security.
In exchange for lower security and robustness, we expect the cost of a parachain lease to be lower on Kusama than on Polkadot. Polkadot will always be 100% focused on security and robustness, and I expect that applications that deal with high-value transactions, such as those in the DeFi space, will always want a Polkadot deployment. But I think there will be a market for applications willing to accept lower security and robustness in exchange for cheap, high throughput, such as those in the gaming, content distribution or social networking sectors. Check out https://polkadot.network/kusama-polkadot-comparing-the-cousins/ for more detailed info!

Q: and for what reasons would a developer choose one over the other?

A (Dieter): Firstly, I see some earlier stage teams who are still iterating on their technology choosing to deploy to Kusama exclusively because of its lower-stakes, faster moving environment where it will be easier for them to iterate on their technology and build their user base. These will likely encompass the above sectors I identified earlier. To these teams, Polkadot becomes an eventual upgrade path for them if, and when, they are able to perfect their product, build a larger community of users and start to need the increased stability and security that Polkadot will provide.
Secondly, I suspect many teams who have their main deployment on Polkadot will also have an additional deployment on Kusama to allow them to test new features, either their tech or changes to the network, before these are deployed to Polkadot mainnet.

Logan Saether, Technical Education, Web3 Foundation

Q: Sweet, let's move over to Logan. Logan - could you introduce yourself, your background, and your role within the Polkadot ecosystem?

A (Logan): My initial involvement in the industry was as a smart contract engineer. During this time I worked on a few projects, including a reboot of the Ethereum Alarm Clock project originally by Piper Merriam. However, I had some frustrations at the time with the limitations of the EVM environment and began to look at other tools which could help me build the projects that I envisioned. This led to me looking at Substrate and completing a bounty for Web3 Foundation, after which I applied and joined the Technical Education team. My responsibilities at the Technical Education team include maintaining the Polkadot Wiki as a source of truth on the Polkadot ecosystem, creating example applications, writing technical documentation, giving talks and workshops, as well as helping initiatives such as the Thousand Validator Programme.

Q: The first technical question submitted for you was: "When will an official Polkadot mobile wallet appear?"

A (Logan): There is already an “official” wallet from Parity Technologies called the Parity Signer. Parity Signer allows you to keep your private keys on an air-gapped mobile device and to interactively sign messages using web interfaces such as Polkadot JS Apps. If you’re looking for something that is more of an interface to the blockchain as well as a wallet, you might be interested in PolkaWallet which is a community team that is building a full mobile interface for Polkadot.
For more information on Parity Signer check out the website: https://www.parity.io/signer

Q: Great thanks...our next question is: If someone already developed an application to run on Ethereum, but wants the interoperability that Polkadot will offer, are there any advantages to rebuilding with Substrate to run as a parachain on the Polkadot network instead of just keeping it on Ethereum and using the Ethereum bridge for use with Polkadot?

A (Logan): Yes, the advantage you would get from building on Substrate is more control over how your application will interact with the greater Polkadot ecosystem, as well as a larger design canvas for future iterations of your application.
Using an Ethereum bridge will probably have more cross chain latency than using a Polkadot parachain directly. The reason for this is due to the nature of Ethereum’s separate consensus protocol from Polkadot. For parachains, messages can be sent to be included in the next block with guarantees that they will be delivered. On bridged chains, your application will need to go through more routes in order to execute on the desired destination. It must first route from your application on Ethereum to the Ethereum bridge parachain, and afterward dispatch the XCMP message from the Polkadot side of the parachain. In other words, an application on Ethereum would first need to cross the bridge then send a message, while an application as a parachain would only need to send the message without needing to route across an external bridge.

Q: DOT transfers won't go live until Web3 removes the Sudo module and token holders approve the proposal to unlock them. But when will staking rewards start to be distributed? Will that have to wait until token transfers unlock? Or will accounts be able to accumulate rewards (still locked) once the network transitions to NPoS?

A (Logan): Staking rewards will be distributed starting with the transition to NPoS. Transfers will still be locked during the beginning of this phase, but reward payments are technically different from the normal transfer mechanism. You can read more about the launch process and steps at http://polkadot.network/launch-roadmap

Q: Next question is: I'm interested in how Cumulus/parachain development is going. ETA for when we will see the first parachain registered working on Kusama or some other public testnet like Westend maybe?

A (Logan): Parachains and Cumulus is a current high priority development objective of the Parity team. There have already been PoC parachains running with Cumulus on local testnets for months. The current work now is making the availability and validity subprotocols production ready in the Polkadot client. The best way to stay up to date would be to follow the project boards on GitHub that have delineated all of the tasks that should be done. Ideally, we can start seeing parachains on Westend soon with the first real parachains being deployed on Kusama thereafter.
The projects board can be viewed here: https://github.com/paritytech/polkadot/projects
Dan: Also...check out Basti's tweet from yesterday on the Cumulus topic: https://twitter.com/bkchstatus/1270479898696695808?s=20

Q: In what ways does Polkadot support smart contracts?

A (Logan): The philosophy behind the Polkadot Relay Chain is to be as minimal as possible, but allow arbitrary logic at the edges in the parachains. For this reason, Polkadot does not support smart contracts natively on the Relay Chain. However, it will support smart contracts on parachains. There are already a couple major initiatives out there. One initiative is to allow EVM contracts to be deployed on parachains, this includes the Substrate EVM module, Parity’s Frontier, and projects such as Moonbeam. Another initiative is to create a completely new smart contract stack that is native to Substrate. This includes the Substrate Contracts pallet, and the ink! DSL for writing smart contracts.
Learn more about Substrate's compatibility layer with Ethereum smart contracts here: https://github.com/paritytech/frontier

Will Pankiewicz, Master of Validators, Parity Technologies


Q: (Dan) Thanks for all the answers. Now we’ll start going through some staking questions with Will related to validating and nominating on Polkadot. Will - could you introduce yourself, your background, and your role within the Polkadot ecosystem?

A (Will): Sure thing. Like many others, Bitcoin drew me in back in 2013, but it wasn't until Ethereum came that I took the deep dive into working in the space full time. It was the financial infrastructure aspects of cryptocurrencies I was initially interested in, and first worked on dexes, algorithmic trading, and crypto funds. I really liked the idea of "Generalized Mining" that CoinFund came up with, and started to explore the whacky ways the crypto funds and others can both support ecosystems and be self-sustaining at the same time. This drew me to a lot of interesting experiments in what later became DeFi, as well as running validators on Proof of Stake networks. My role in the Polkadot ecosystem as “Master of Validators” is ensuring the needs of our validator community get met.

Q: Cool thanks. Our first community question was "Is it still more profitable to nominate the validators with lesser stake?"

A (Will): It depends on their commission, but generally yes, it is more profitable to nominate validators with less stake. When a validator has less stake, your nomination makes up a higher percentage of their total stake. This means that when rewards get distributed, they are split more favorably toward you, as rewards are split by total stake percentage. Our entire rewards scheme is that every era (6 hours on Kusama, 24 hours on Polkadot), a certain amount of rewards gets distributed, where that amount depends on the total amount of tokens staked for the entire network (50% of all tokens staked is currently optimal). These rewards at the end of an era get distributed roughly equally to all validators active in the validator set. The reward given to each validator is then split between the validator and all their nominators, determined by the total stake that each entity contributes. So if you contribute a higher percentage of the total stake, you will earn more rewards.
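To make that concrete, here is a small sketch with illustrative numbers (not real network values) of why the same nomination earns more behind a lightly staked validator:

```cpp
#include <cstdio>

int main() {
    // Each active validator receives a roughly equal era reward,
    // regardless of how much total stake backs it.
    const double eraRewardPerValidator = 100.0;  // tokens, after commission
    const double myStake = 1'000.0;

    double heavyTotal = 50'000.0;  // heavily staked validator
    double lightTotal = 10'000.0;  // lightly staked validator

    double heavyShare = eraRewardPerValidator * myStake / heavyTotal;  // 2.0
    double lightShare = eraRewardPerValidator * myStake / lightTotal;  // 10.0

    std::printf("heavy: %.1f, light: %.1f tokens per era\n",
                heavyShare, lightShare);
}
```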

Q: What does priority ranking under nominator addresses mean? For example, what does it mean that nominator A has priority 1 and nominator B has priority 6?

A (Will): Priority ranking is just the index of the nomination as stored on chain. It has no effect on how stake gets distributed in Phragmén or how rewards get calculated. It is only the order in which the nominator chose their validators. Stake from a nominator gets distributed to validators via Phragmén, an algorithm that optimally puts stake behind validators so that the distribution is roughly equal among those that get into the validator set. It will try to maximize the total amount at stake in the network and maximize the stake behind minimally staked validators.

Q: On Polkadot.js, what does it mean when there are nodes waiting on Polkadot?

A (Will): In Polkadot there is a fixed validator set size that is determined by governance. The way validators get into the active set is by having the highest amount of total stake relative to other validators. So if the validator set size is 100, the top 100 validators by total stake will be in the validator set. Those not active in the validator set are considered “waiting”.

Q: Another question...Is it necessary to become a waiting validator node right now?

A (Will): It's not necessary, but it is highly encouraged if you actively want to validate on Polkadot. The longer you are in the waiting tab, the more exposure you get to nominators who may nominate you.

Q: Will current validators for Kusama also validate for Polkadot? How strongly should I consider their history (with Kusama) when looking to nominate a good validator for DOTs?

A (Will): A lot of Kusama validators will also be validators for Polkadot, as KSM was initially distributed to DOT holders. The early Kusama Validators will also likely be the first Polkadot validators. Being a Kusama validator should be a strong indicator for who to nominate on Polkadot, as the chaos that has ensued with Kusama has allowed validators to battle test their infrastructure. Kusama validators by now are very familiar with tooling, block explorers, terminology, common errors, log formats, upgrades, backups, and other aspects of node operation. This gives them an edge against Polkadot validators that may be new to the ecosystem. You should strongly consider well known Kusama validators when making your choices as a nominator on Polkadot.

Q: Can you go into more details about the process for becoming a DOT validator? Is it similar as the KSM 1000 validators program?

A (Will): The process for becoming a DOT validator is first to have DOTs. You cannot be a validator without DOTs, as DOTs are used to pay transaction fees, and the minimum amount of DOTs you need is enough to create a validate transaction. After obtaining enough DOTs, you will need to set up your validator infrastructure. Ideally you should have a validator node with specs that match what we call standard hardware, as well as one or more sentry nodes to help isolate the validator node from attacks. After the infrastructure is up and running, you should have your Polkadot accounts set up correctly, with a stash bonded to a controller account, and then submit a validate transaction, which will tell the network your nodes are ready to be a part of the network. You should then try to build a community around your validator to let others know you are trustworthy, so that they will nominate you. The 1000 Validators Programme for Kusama is a programme that gives a certain amount of nominations from the Web3 Foundation and Parity to help bootstrap a community and reputation for validators. There may eventually be a similar type of programme for Polkadot as well.
Dan: Thanks a lot for all the answers, Will. That’s the end of the pre-submitted questions and now we’ll open the chat up to live Q&A, and our three team members will get through as many of your questions as possible.
We will take questions related to business development, technology, validating, and staking. For those wondering about DOT:
DOT tokens do not exist yet. Allocations of Polkadot's native DOT token are technically and legally non-transferable. Hence any publicized sale of DOTs is unsanctioned by Web3 Foundation and possibly fraudulent. Any official public sale of DOTs will be announced on the Web3 Foundation website. Polkadot's launch process started in May, and with full network decentralization later this year, holders of DOT allocations will determine issuance and transferability. For those who participated in previous DOT sales, you can learn how to claim your DOTs here (https://wiki.polkadot.network/docs/en/claims).


Telegram Community Follow-up Questions Addressed Below


Q: Polkadot looks good, but it confuses me that there are so many other blockchain projects. What should I pay attention to in Polkadot to give it the importance it deserves? What are you planning to achieve with your project?

A (Will): Personally, what I think differentiates it is the governance process. Coordinating forkless upgrades and social coordination helps stand it apart.
A (Dieter): The wiki is awesome - https://wiki.polkadot.network/

Q: Over 10,000 ETH was paid as a transaction fee; what if this happens on Polkadot? Is it possible we can go through governance to return it to the owner?

A: Anything is possible with governance including transaction reversals, if a network quorum is reached on a topic.
A (Logan): Polkadot transaction fees work differently than the fees on Ethereum so it's a bit more difficult to shoot yourself in the foot as the whale who sent this unfortunate transaction. See here for details on fees: https://w3f-research.readthedocs.io/en/latest/polkadot/Token%20Economics.html?highlight=transaction%20fees#relay-chain-transaction-fees-and-per-block-transaction-limits
However, there is a tip that the user can input themselves, which they could accidentally set to a large amount. In that case, yes, they could petition governance to reduce the amount that was paid in the tip.

Q: What is the minimum ideal amount of DOT and KSM to have if you want to become a validator and how much technical knowledge do you need aside from following the docs?

A (Will): It depends on what the other validators in the ecosystem are staking as well as the validator set size. You just need to be in the top staking amount of the validator set size. So if its 100 validators, you need to be in the top 100 validators by stake.

Q: Will Web3 nominate validators? If yes, which criteria to be elected?

A (Will): Web 3 Foundation is running programs like the 1000 validators programme for Kusama. There's a possibility this will continue on for Polkadot as well after transfers are enabled. https://thousand-validators.kusama.network/#/
You will need to be an active validator to earn rewards. Only those active in the validator set earn rewards. I would recommend checking out parts of the wiki: https://wiki.polkadot.network/docs/en/maintain-guides-validator-payout

Q: Is it possible to implement hashtables or a DAG with Substrate?

A (Logan): Yes.

Q: Polkadot project looks very futuristic! But, could you tell us the main role of DOT Tokens in the Polkadot Ecosystem?

A (Dan): That's a good question. The short answer is Staking, Governance, Bonding. More here: http://polkadot.network/dot-token

Q: How did you manage to prove that the consensus protocol is safe and unbreakable mathematically?

A (Dieter): We have a research team of over a dozen scientists with PhDs and post-docs in cryptography and distributed computing who do thorough theoretical analyses of all the protocols used in Polkadot.

Q: What are the prospects for NFT?

A: Already being built 🙂

Q: What is Polkadot's roadmap for the rest of 2020?

A (Dieter): Building. But seriously - we will continue to add many more features and upgrades to Polkadot as well as continue to strongly focus on adoption from other builders in the ecosystem 🙂
A (Will): https://polkadot.network/launch-roadmap/
This is the launch roadmap. Ideally, parachains and XCMP will be added towards the end of the year.

Q: How Do you stay active in terms of marketing developments during this PANDEMIC? Because I'm sure you're very excited to promote more after this settles down.

A (Dan): The main impact of covid was the impact on in-person events. We have been very active on Crowdcast for webinars since 2019, so it was quite the smooth transition to all-online events. You can see our 40+ past event recordings and follow us on Crowdcast here: https://www.crowdcast.io/polkadot. If you're interested in following our emails for updates (including online events), subscribe here: https://info.polkadot.network/subscribe

Q: Hi, who do you think is your biggest competitor in the space?

A (Dan): Polkadot is a metaprotocol that hasn't been seen in the industry up until this point. We hope to elevate the industry by providing interoperability between all major public networks as well as private blockchains.

Q: Is Polkadot a friend or competitor of Ethereum?

A: Polkadot aims to elevate the whole blockchain space with serious advancements in interoperability, governance and beyond :)

Q: When will there be hardware wallet support?

A (Will): Parity Signer works well for now. Other hardware wallets will be added pretty soon

Q: What are the attractive features of the DOT project that can attract new users?

A: https://polkadot.network/what-is-polkadot-a-brief-introduction/
A (Will): Building parachains with cross-chain messaging + bridges to other chains will, I think, be a very appealing feature for developers

Q: In your view, how much time will it take for Polkadot to reach mainstream adoption and execute all the plans set for this project?

A: We are solving many problems that have held back the blockchain industry up until now. Here is a summary in basic terms:
https://preview.redd.it/ls7i0bpm8p951.png?width=752&format=png&auto=webp&s=a8eb7bf26eac964f6b9056aa91924685ff359536

Q: When will bitpie or imtoken support DOT?

A: We are working on integrations on all the biggest and best wallet providers. ;)

Q: What event/call can we track to catch the switch to NPoS? Is it only the force_new_era call? Thanks.

A (Will): If you're on riot, useful channels to follow for updates like this are #polkabot:matrix.org and #polkadot-announcements:matrix.parity.io
A (Logan): Yes, this is the trigger for initiating the switch to NPoS. You can also poll the ForceEra storage for when it changes to ForceNew.
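For illustration, a polling loop along these lines should work. This is a hedged sketch assuming the community py-substrate-interface package (`pip install substrate-interface`) and a public RPC endpoint, not an official Web3 Foundation tool:

```python
# Sketch only: assumes the community py-substrate-interface package and a
# public Polkadot RPC endpoint; adjust both for your own setup.
import time

from substrateinterface import SubstrateInterface

substrate = SubstrateInterface(url="wss://rpc.polkadot.io")

while True:
    force_era = substrate.query("Staking", "ForceEra")
    print("ForceEra is currently:", force_era.value)
    if force_era.value == "ForceNew":   # the trigger mentioned above
        print("Switch to NPoS initiated")
        break
    time.sleep(60)                      # poll once a minute
```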

Q: What strategy will the Polkadot Team use to make new users trust its platform and be part of it?

A (Will): Pushing bleeding-edge cryptography from Web3 Foundation research
A (Dan): https://t.me/PolkadotOfficial/43378

Q: What technology stands behind it, and what are its advantages?

A (Dieter): Check out https://polkadot.network/technology/ for more info on our tech stack!

Q: What problems do you see occurring in the blockchain industry nowadays, and how does your project aim to solve them?

A (Will): Governance I see as a huge problem. For example, upgrading Bitcoin and making decisions about changing things is a very challenging process. We have robust systems of on-chain governance to help solve these coordination problems.

Q: How involved are the Polkadot partners? Are they helping with the development?

A (Dieter): There are a variety of groups building in the Polkadot ecosystem. Check out http://www.polkaproject.com/ for a great list.

Q: Can you explain the role of the treasury in Polkadot?

A (Will): The treasury is for projects or people that want to build things, but don't want to go through the formal legal process of raising funds from VCs or grants or what have you. You can get paid by the community to build projects for the community.
A: There’s a whole section on the wiki about the treasury and how it functions here https://wiki.polkadot.network/docs/en/mirror-learn-treasury#docsNav

Q: Any plan to introduce Polkadot in Asia, or to the rising markets in Asia?

A (Will): We're globally focused.

Q: What kind of impact do you expect from the Council? Although it would be elected by token holders, what kind of people you wish to see there?

A (Will): Community focused individuals like u/jam10o that want to see cool things get built and cool communities form

If you have further questions, please ask in the official Polkadot Telegram channel.
submitted by dzr9127 to dot [link] [comments]

Mining and Dogecoin - Some FAQs

Hey shibes,
I see a lot of posts about mining lately and questions about the core wallet and how to mine with it, so here are some facts!
Feel free to add information to that thread or correct me if I did any mistake.

You downloaded the core wallet

Great! After a decade it probably synced, and now you are wondering how to get coins? Bad news: you don't get coins by running your wallet, not even by running it as a full node. Check what a full node is here.
Maybe you thought so because you saw a very old screenshot of a wallet, like this (version 1.2). This version had a "Dig" tab where you could enter your mining configuration. The current version doesn't have this anymore, probably because it no longer makes sense.

You downloaded a GPU/CPU miner

Nice! You did it, even though your antivirus probably went postal and you started covering all your webcams... But here is the bad news again: since people are using ASIC miners, you just can't compete with your CPU hardware anymore. Even with your more advanced GPU you will have a hard time. The network hashrate is simply too high for a desktop PC to compete with. Blocks should be mined every minute (or so), and that's what drives the difficulty up - and we are out... So definitely check what your hashrate is while you are mining: you would need about 1.5 MH/s to make 1 Doge in 24 hours!

Mining Doge

Let us start with a quote:
"Dogecoin Core 1.8 introduces AuxPoW from block 371,337. AuxPoW is a technology which enables miners to submit work done while mining other coins, as work on the Dogecoin block chain."
- langerhans
What does this mean? You could spend your hashrate only on the Dogecoin chain, probably never find a block, and even when you do, you only receive about 10,000 Doge, currently worth about $25. Or you could apply your hashrate to LTC and Doge (and possibly more chains) at the same time. Your chance of solving a block (finding the nonce) is your hashrate divided by the total network hashrate - and this is about the same for Doge and LTC. This means you will always want to submit your work to all chains available!
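As a rough sketch of that chance calculation (the network hashrate figure below is a made-up placeholder; plug in the current value):

```python
# Rough sketch of the per-block chance; the network hashrate is a placeholder.
my_hashrate = 1e6          # 1 MH/s of scrypt
network_hashrate = 300e12  # assumed total network scrypt hashrate (placeholder!)
blocks_per_day = 24 * 60   # one block per minute

p_block = my_hashrate / network_hashrate      # chance to find any given block
expected_blocks = p_block * blocks_per_day    # expected blocks found per day
print(f"per-block chance: {p_block:.2e}, expected blocks/day: {expected_blocks:.6f}")
# With AuxPoW the same work also counts on the LTC chain, so the second chain is "free".
```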

Mining solo versus pool

So let's face it - mining solo won't get you anywhere, so let's mine on a pool! If you have a really low hashrate, please consider this: often you need about $1 or $2 worth of crypto to receive a payout (before fees), which means you first have to get there. With 100 MH/s on prohashing, it takes about 6 days of running 24/7 to reach that threshold. Now you can do the math... 1 MH/s = 1000 KH/s; if you are below 1 MH/s, you probably won't have fun.
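A quick back-of-the-envelope check of those numbers, using only the post's own rough figures (1 Doge per day per 1.5 MH/s, $25 per 10,000 Doge):

```python
# Back-of-the-envelope check using the post's own rough figures.
doge_per_day_per_mhs = 1 / 1.5     # ~1 Doge per day per 1.5 MH/s (from above)
usd_per_doge = 25 / 10_000         # ~$25 per 10,000 Doge

def days_to_payout(hashrate_mhs: float, threshold_usd: float = 1.0) -> float:
    usd_per_day = hashrate_mhs * doge_per_day_per_mhs * usd_per_doge
    return threshold_usd / usd_per_day

print(days_to_payout(100))  # ~6 days at 100 MH/s, matching the prohashing example
print(days_to_payout(1))    # ~600 days below 1 MH/s: "you probably won't have fun"
```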

Buying an ASIC

You found an old BTC USB miner with 24 GH/s (1 GH/s = 1000 MH/s) for $80 - next stop, Lambo!? Sorry, bad news again: that hashrate is for SHA-256! If you want to mine LTC/Doge you will need a miner for scrypt, with much lower hashrate numbers, so don't fall for that. Often when you have a big miner (= also loud), you get more hashrate per $ spent on the miner, but most will still run at an operational loss, because the electricity is too expensive and the miners will soon be outdated again. Which leads me to my next point...

Making profit

You won't make money running your miner. Just do the math: what if you had bought a miner 1 year ago? Subtract the costs for electricity and then compare it to: what if you had just bought coins? In most cases you would have made a greater profit by just buying coins, maybe even with a "stable" coin like Doge.

Cloud Mining

Okay, this was a lot of text and you are still on the hook? Maybe you are desperate enough to invest in some cloud mining contract... But this isn't a good idea either, because most of these contracts are scams based on a Ponzi scheme. You can often spot them easily, because they guarantee way too high profits, or they fake payouts that never happened, etc.
Just a thought: if someone in a subway says to you, "Give me $1 and let's meet in one year, right here, and I'll give you $54,211,841", you wouldn't trust him - and if some mining contract says they will give you 5% a day, it is basically the same.
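You can check the subway math yourself; that figure is just $1 compounded at 5% per day for a year:

```python
# The subway analogy, checked: $1 compounding at 5% per day for one year.
value = 1.0 * (1 + 0.05) ** 365
print(f"${value:,.0f}")  # ~$54,211,841, which is why "5% a day" always means scam
```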
Also remember the merged mining part: nobody would offer you to mine only Doge; they would offer you to buy scrypt hashrate that applies to multiple chains.

Alternative coins

Maybe try to mine a coin for which ASICs don't exist yet, like Monero, and exchange it for Doge. If somebody has already tried this - feel free to add your thoughts!

Folding at Home (Doge)

Some people say Folding at Home (FAH - https://www.dogecoinfah.com/) is still the best. I just installed the tool and it says I would make 69,852 points a day running on medium power, which equates to about 8 Doge. It is easy, it was fun, but it isn't much.
Thanks for reading
_nformant
submitted by _nformant to dogecoin [link] [comments]

Stop using number of git commits as any metric for anything, it's idiocy

Stop measuring git commits, it is stupid! On so many levels and from so many perspectives, the number of commits is a super duper terrible metric.
Before I argue my statement, I would like to say that of course, it looks bad with absolutely NO public activity from developers over a long period of time (6-12 months). I say ‘public activity’ because there can be activity, as in code being written, without it being public. More about that below.
Some folks seem to be very keen on using the number of commits as an indicator for the success of a project. There are sites highlighting these irrelevant metrics, e.g. https://www.cryptomiso.com/

Short about me:

I have a master's degree in computer science and I've worked professionally as a developer for 9 years. I have developed two crypto libraries and one crypto wallet, and I'm working on my second one. I will not mention which ones, primarily because it is irrelevant and secondly because I don't want this post to be downvoted for shilling any specific crypto project.

First let's re-iterate some important concepts

VCS (Version Control System): `git` is the most popular; `svn` is another, but it is older and not used as much any more
Git is a VCS protocol, nothing else. Git is not Github. Github uses git
Github: is an American for-profit company owned by Microsoft (bought in 2018), it is one of the most popular code hosting platform using the Git protocol
Gitlab: an alternative to Github
Bitbucket: an alternative to Github, owned by Atlassian, who also develops Jira.

More info about Git:

“git commit”: A commit is like a bookmark: when you read a book, you can either place a bookmark on every page, or read the whole book without any bookmarks. A commit is just saved locally on your computer until you “git push”, see below
“git push”: Sending your local commit or commits to any remote git repo, which is a project hosted by any code hosting platform, e.g. Github.
“git squash”: Some people like to make many commits while coding, but just prior to pushing the code, they squash all those commits into a single one. (There is no literal `git squash` command; in practice this is done with `git rebase -i` or `git merge --squash`.)
"commit --amend": Let's say I just commited a change in the README, and then I noticed that I misspelled a word, then I can fix that commit (changing it), and fixing the misspelled word, by using `git commit --amend`. Some developers do that, other just fix the misspelled word in a new commit. The difference is that `git commit --amend` results in one single commit (changed), whereas the latter results in two commits.

Different methodology, but same code:

How often developers commit differs A LOT, and I mean completely. Personally, I tend to code for a couple of hours, days or even weeks without making a single commit (frowned upon by some), whereas other developers might commit every changed line of code. When the code gets pushed to the VCS remote repo (e.g. Github), it is still the exact same code. But coming from me it might be a single commit, whereas coming from Alice it might be 1000 commits. Same code, but a difference in the number of commits of three orders of magnitude.

Git squash

In the example above, maybe Alice committed 1000 times (whereas I committed once), but Alice also likes to have one single commit per feature/bugfix/improvement she is working on, so she squashes and merges all her 1000 commits into one. So now Alice's method and my method ARE EXACTLY the same once the code is pushed to Github. But it is impossible for the rest of us to know that Alice's single commit was actually 1000 commits prior to being squashed.

Private repos

Even though most crypto projects are open-source, some code might not be open-sourced at first, but might be at a later point in time, so these repositories will be hidden from the public, thus there can be a lot of activity in a certain project without the public knowing about it.

Personal repos

Even though most companies/projects have all their repositories under the same organization on the VCS code hosting platform, some repos relevant to the project might not be. E.g. if you look at Bitcoin's page on Github, you will find 4 repositories: https://github.com/bitcoin but some of its core developers might write experimental code in separate personal repos (that might be private), or in repos not yet pushed, i.e. code sitting locally on their computer.

Forks

When Alice codes in a distributed project with many contributors, it might be most suitable for her not to use the project's repo directly, but rather a personal copy of the whole code project, known as a fork. (Please note that this has nothing to do with 'forks of a DLT (e.g. blockchain)', as in spawning a new version of a crypto project, e.g. Litecoin and Bitcoin Cash being forks of Bitcoin; I'm talking about a 'git fork' here.) So Alice codes away in any branch, with any number of commits or a single one, in her own personal (git) fork of e.g. Bitcoin, her own repo. Then after some time (hours, days, weeks, months), she creates a Pull Request to the 'upstream repo' (the original/source repo), and if the other developers are happy with her work, it gets merged. So there might be activity, many or few commits, in another git repo that is a fork of the original one. The Bitcoin Core Github repo currently has 23,958 git forks: https://github.com/bitcoin/bitcoin/network/members that's actually so many that Github displays this message: "Woah, this network is huge! We're showing only some of this network's repositories". So in order for you to KNOW that there is NO activity from any developer, you would actually need to go through ALL forks (in this case ~24 thousand) of a repo to see that there have been no recent commits. But as stated above, not even that is enough (the commits might not have been pushed yet, right?)

Irrelevant "wash" commits

I just coined the term "wash commits", so don't google it (you will only get images of jeans... LOL). Just like there are wash trades faking volume, any developer can, either manually or with some trivial script, at a regular interval just add some character, e.g. a space, to any file of a project, git commit and push that change, and then perform another git commit removing said newly appended character. Then it will look like the project has activity. Hell, you can even do this in 10,000 commits daily. "Wow man! Look at all that activity! This crypto project is the best!" - well, no.
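To show just how cheap the metric is to game, here is a deliberately dumb sketch of such a script (it assumes it runs inside a git repository with a tracked README.md; please don't actually use it):

```python
# A deliberately dumb "wash commit" generator, showing how cheap the metric is to game.
# Assumes it runs inside a git repo with a tracked README.md. Don't actually do this.
import subprocess
import time

def wash_commit(path: str = "README.md") -> None:
    with open(path, "a") as f:
        f.write(" ")                                   # append one space
    subprocess.run(["git", "commit", "-am", "update"], check=True)
    with open(path) as f:
        content = f.read()
    with open(path, "w") as f:
        f.write(content[:-1])                          # remove it again
    subprocess.run(["git", "commit", "-am", "cleanup"], check=True)

for _ in range(5):                                     # ten commits, zero actual work
    wash_commit()
    time.sleep(1)
```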

No squash and no amend

Two developers, Alice and Bob: neither squashes commits, but Alice uses `git commit --amend` to fix typos and other smaller changes, whereas Bob does not. Since neither squashes, over a long period of time this might result in a huge difference in their numbers of commits.

Rebase vs Merge

When Alice and Bob, working in the same repo, want to merge together the different features they have been working on, they can do so using two different methods: `git merge` or `git rebase`. The former results in one extra commit (a commit for the merge event itself), whereas the latter does not result in any extra commit. These are different styles of working, and which one to prefer is often debated. Over a long period of time this might result in a huge difference in the number of commits.

More LoC, worse

LoC = Lines of Code. The more lines of code, the worse, okay? Many LoC is NOT, in any way, a good thing. The theoretically best code base (however impossible in practice) is the code base with 0 lines of code. It is trivial to maintain: you just have to do... nothing. It contains NO bugs. Code in its natural state is buggy. So many commits ADDING new code are not always good. Better are commits removing code, given the same functionality.

"But but but... how can I easily determine which crypto project is best by looking at Gitlab/Bitbucket/Github?"

Well, you can't; that is my point. But if you want some tips on what to look for, these metrics are actually relevant:
  1. Number of contributors
  2. Number of forks
  3. Number of stars
  4. Number of pull requests (PR for short, called "Merge Request" in Gitlab), and how many of them are open? How fast does a PR get merged?
  5. Last commit date: WARNING, watch for false positives! Remember "wash commits" (mentioned above): if the last commit date is recent, it does NOT necessarily mean that the project is active. Have a look at the commit. Does it look trivial or not? A trivial commit is e.g. one adding a newline/space in the README.
submitted by Sajjon to CryptoCurrency [link] [comments]

I bought $1000 worth of the Top Ten Cryptos on January 1st, 2018 (Feb 2020 Update)


2018 \"Index Fund\" EXPERIMENT - Tracking Top 10 Cryptocurrencies of 2018 - Feb 2020/Month Twenty-Six Update - Down 81%
Note: the snapshot was taken on the 1st of March, so it does not include the current COVID craziness. Stay tuned for next month's update to see the result of the current crypto nosedive and how it compares to the current stock market nosedive.
Stay safe, wash your hands, take care of each other.
See the full blog post with all the tables here.

Month Twenty-Six – Down 81%

After a strong start to 2020, February saw a bit of a pullback with nearly every 2018 Top Ten crypto ending in the red. Ethereum is the notable exception, gaining +21% for the month.

Ranking and February Winners and Losers

A mixed month in terms of movement for this group of cryptos. NEM and Stellar made positive moves, while Cardano and IOTA fell two and four positions, respectively. Dash gave up some of the ground it gained in January, when it jumped an unprecedented 10 slots; this month it fell from #16 back to #20.
For overall drop out rate, we’re back at the 50% mark: half of the cryptos that started 2018 in the Top Ten have dropped out, specifically NEM, Dash, IOTA, Cardano, and Stellar. They have been replaced by EOS, Binance Coin, Tezos, Tether, and BSV.
February Winners – Ethereum easily outperformed the field this month with a +21% gain. NEM finished in second place, up +4%.
February Losers – All the other cryptos ended the month in the red. IOTA picks up the L this month, losing -28% of its value followed closely by Dash which finished down -27%.
For the nerds: here is a tally of which coins have the most monthly wins and losses in the first 26 months of the 2018 Top Ten Crypto Index Fund Experiment. Most monthly wins (6): Bitcoin. Most monthly losses (5): Stellar. All cryptos have at least one monthly win, and Bitcoin now stands alone as the only crypto that hasn't lost a month (although it came close in January 2020, when it gained "only" +31%).

Overall update – BTC far ahead, ETH takes second place from LTC, IOTA and NEM worst performing

No news here: Bitcoin is still well ahead of the field. Although down -35% since the beginning of 2018, BTC is still returning roughly double the next crypto down, Ethereum, which, thanks to a strong February, has overtaken Litecoin for second place.
While NEM remains at the bottom, IOTA is dropping quickly. They are down -95% and -94% respectively.

Total Market Cap for the entire cryptocurrency sector:

The overall crypto market lost about $12B in February 2020, a non-event in the crypto world. Since January 2018, the total market cap is down about -57%.

Bitcoin dominance:

Bitcoin dominance ticked down another two points to 64% in February 2020. The last time BitDom was this low was back in July 2019. The range since the beginning of the experiment in January 2018 has been quite wide: a high of 70% in September 2019 and a low of 33% in February 2018.

Overall return on investment since January 1st, 2018:

The 2018 Top Ten Portfolio lost about $16 in February 2020. If I cashed out today, my $1000 initial investment would return about $186, down -81% from January 2018.
Here’s a look at the ROI over the life of the experiment, month by month:
As you can see, nothing but red. The closest the 2018 Top Ten group has come to breaking even was after the very first month, when the portfolio was down -20%. It has been at a loss of -80% or worse for the past seven months in a row.
The 2019 Top Ten Experiment and the just launched 2020 Top Ten Experiment are both doing much better:
Taking the three portfolios together, here’s the bottom bottom bottom line:
After a $3000 investment in the 2018, 2019, and 2020 Top Ten Cryptocurrencies, my portfolios are worth $‭3,170‬.
That’s up about +5.6%.

Implications/Observations:

As always, the experiment’s focus of solely holding the Top Ten Cryptos continues to be a losing approach. While the overall market is down -57% from January 2018, the cryptos that began 2018 in the Top Ten are down -81% over the same period. This of course implies that I would have done a bit better if I’d picked different cryptos.
At no point in this experiment has this investment strategy been successful: the initial 2018 Top Ten have under-performed each of the twenty-six months compared to the market overall.
There are a few examples, however, of this approach outperforming the overall market in the parallel 2019 Top Ten Crypto Experiment. And the first two months of the 2020 Experiment show that focusing on the Top Ten is a winning strategy.
I’m also tracking the S&P 500 as part of my experiment to have a comparison point with other popular investments options. After a rough coronavirus fueled week, the S&P 500 has lost a lot of ground. It is currently up only +11% since the beginning of 2018. The initial $1k investment into crypto would have yielded about +$110 had it been redirected to the S&P.
Taking the same drop-$1,000-per-year-on-January-1st approach with the S&P 500 that I’ve been documenting through the Top Ten Crypto Experiments would yield the following:
  • $1000 investment in S&P 500 on January 1st, 2018: +$110
  • $1000 investment in S&P 500 on January 1st, 2019: +$180
  • $1000 investment in S&P 500 on January 1st, 2020: -$90
Taken together, here’s the bottom bottom bottom line for a similar approach with the S&P:
After three $1,000 investments into an S&P 500 index fund in January 2018, 2019, and 2020, my portfolio would be worth $3,200.
That’s up about +6.7% compared to +5.6% with the Top Ten Crypto Experiments.
That’s getting pretty close now, eh? While this month’s update/snapshot is greatly influenced by the coronavirus stock market correction, a difference of only 1% is definitely worth noting.
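For the record, here's the arithmetic behind those two bottom lines:

```python
# The arithmetic behind the two bottom lines above.
invested = 3_000
crypto_value, sp500_value = 3_170, 3_200
print(f"Top Ten portfolios: {crypto_value / invested - 1:+.1%}")  # +5.7% (post rounds to +5.6%)
print(f"S&P 500 approach:   {sp500_value / invested - 1:+.1%}")   # +6.7%
```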

Conclusion:

Not a great month for crypto, but not a horrible one, especially if you compare to the free fall in the stock market. Depending on how the coronavirus influences both traditional and crypto markets, we could be in for an interesting few months.
Thanks for reading and for supporting the experiment. I hope you’ve found it helpful. I continue to be committed to seeing this process through and reporting along the way. Feel free to reach out with any questions and stay tuned for progress reports. Keep an eye out for my parallel projects where I repeat the experiment twice, purchasing another $1000 ($100 each) of two new sets of Top Ten cryptos as of January 1st, 2019 then again on January 1st, 2020.
submitted by Joe-M-4 to CryptoCurrency [link] [comments]

What is Quant Networks Blockchain Operating System, Overledger? And why are Enterprises adopting it at mass scale?

What is Quant Networks Blockchain Operating System, Overledger? And why are Enterprises adopting it at mass scale?
Overledger is the world's first blockchain operating system (OS): it inter-connects not only blockchains but also existing enterprise platforms, applications and networks to blockchain, and it facilitates the creation of internet-scale multi-chain applications, otherwise known as MAPPs.
In less than 10 months since launching Overledger, they have provided interoperability with the full range of DLT technologies: all the leading enterprise permissioned blockchains, such as Hyperledger, R3's Corda, JP Morgan's Quorum, and permissioned variants of Ethereum and Ripple (XRPL), as well as the leading public permissionless blockchains / DAGs, such as Bitcoin, Stellar, Ethereum, IOTA and EOS, plus the most recently added blockchain, Binance Chain. In addition, Overledger also connects to existing networks / off-chain / oracle functionality. It does all of this in a way that is hugely scalable, without imposing restrictions on connected blockchains or requiring them to fork their code, and it can be integrated into existing applications / networks by adding just 3 lines of code.

https://preview.redd.it/3t3z6hkbxel31.png?width=1920&format=png&auto=webp&s=ac989c2752c726e10d2291eb271721ceaa332a30

What is a blockchain Operating system?

You will be familiar with operating systems such as Microsoft Windows, Apple macOS, Google's Android etc, but these are all hardware-based operating systems. A hardware-based operating system provides a platform for building and using applications, abstracting away all of the complexity of integrating with hardware resources such as CPU, memory, storage, mouse, keyboard, video etc, so that software can easily work with them. It provides interoperability between the hardware devices and software.
Overledger is a blockchain operating system. It provides a platform for building and using applications, abstracting away all of the complexity of integrating with all the different blockchains (the different OP_Codes being used, messaging formats etc) as well as connecting to existing non-blockchain networks. It provides interoperability between blockchains, existing networks and software / MAPPs.

How is Overledger different to other interoperability projects?

Other projects are trying to achieve interoperability by adding another blockchain on top of existing blockchains. This adds a lot of overhead, complexity, and technical risk. There are a few variants but essentially they either need to create custom connectors for each connected blockchain and / or require connected chains to fork their code to enable interoperability. An example of the process can be seen below:
User sends transaction to a multi sig contract on Blockchain A, wait for consensus to be reached on Blockchain A
A custom connector consisting of Off Chain Relay Nodes are monitoring transactions sent to the smart contract on Blockchain A. Once they see the transaction, they then sign a transaction on the Interoperability blockchain as proof the event has happened on Blockchain A.
Wait for consensus to be reached on the Interoperability Blockchain.
The DAPP running on the Interoperability Blockchain is then updated with the info about the transaction occurring on Blockchain A and then signs a transaction on the Interoperability blockchain to a multi sig contract on the Interoperability Blockchain.
Wait for consensus to be reached on the interoperability Blockchain.
A different custom connector consisting of Off Chain Relay Nodes are monitoring transactions sent to the Smart Contract on the Interoperability Blockchain which are destined for Blockchain B. Once they see the transaction, they sign a transaction on Blockchain B. Wait for consensus to be reached on Blockchain B.

https://preview.redd.it/xew1eu1exel31.png?width=1558&format=png&auto=webp&s=df960ded46d40fc9bf0ae8b54ff3b3b86276708a
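The chain-of-consensus overhead is easy to see in code. Below is a minimal, runnable sketch with stub classes (none of this is any real project's API); it simply counts the consensus waits a single cross-chain transfer incurs under the relay pattern above, assuming roughly 6 seconds of finality per chain:

```python
# Stub classes for illustration only; the point is to count the consensus waits.
class Chain:
    def __init__(self, name: str, finality_s: int):
        self.name = name
        self.finality_s = finality_s

    def submit_and_finalize(self, action: str) -> int:
        print(f"{self.name}: {action} (wait {self.finality_s}s for consensus)")
        return self.finality_s

def relay_transfer(chain_a: Chain, interop: Chain, chain_b: Chain) -> int:
    total = chain_a.submit_and_finalize("lock funds in multisig contract")
    total += interop.submit_and_finalize("relay nodes attest the Chain A event")
    total += interop.submit_and_finalize("DAPP signs the outbound transfer request")
    total += chain_b.submit_and_finalize("second connector signs on Chain B")
    return total

latency = relay_transfer(Chain("Chain A", 6), Chain("Interop", 6), Chain("Chain B", 6))
print(f"{latency}s across four consensus waits, versus two without the middle chain")
```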
Other solutions require every connecting blockchain to fork its code and implement the project's interoperability protocol. This means the same type of connector can be used instead of a custom one for every blockchain; however, every connected blockchain has to fork its code to implement the protocol. This imposes a lot of restrictions on what the connected blockchains can implement going forward.

https://preview.redd.it/pe166qyexel31.png?width=1561&format=png&auto=webp&s=d4c982089276e64cd909537c9ce744b59e168b6d
Some problems with these methods:
  • They add a lot of Overhead / Latency. Rather than just having the consensus of Blockchain A and B, you add the consensus mechanism of the Interoperability Blockchain as well.
  • Decentralisation / transaction security is reduced. If Blockchain A and Blockchain B each have 1,000 nodes validating transactions, yet the Interoperability Blockchain only has 100 nodes then you have reduced the security of the transaction from being validated by 1000 to validated by 100.
  • Security of the interoperability blockchain must be greater than the sum of all transactions going through it. JP Morgan transfers $6 trillion every day; if they move that onto blockchain and need interoperability between two permissioned blockchains that have to connect via a public interoperability blockchain, then attacking that middle blockchain would always need to be more costly than the value of the funds transacted through it.
  • It imposes a lot of limitations on connected blockchains, which have to fork their code; this may mean dropping some existing functionality, and it can prevent them from adding certain features in the future.
  • It creates a single point of failure: if the interoperability blockchain or a connector has an issue, this affects every connected blockchain.
  • It doesn't scale and acts as a bottleneck. Not only does building complex custom connectors not scale, but the interoperability blockchain that all transactions are forced through has to be faster than the combined throughput of the connected blockchains. These interoperability blockchains have limited tps, with the highest being around 200, and even that is a trade-off between performance and decentralisation.

But some Interoperability blockchains say they are infinitely scalable?

If the interoperability blockchain is limited to, say, 200 tps, then the idea is to just run multiple instances of the blockchain in parallel, so you benefit from the aggregated tps. But just how feasible is that? Let's say you want to connect Corda (capable of 2,000+ tps) to Hyperledger (capable of up to 20,000 tps with a recent upgrade). (Permissioned blockchains such as Hyperledger and Corda aren't one big blockchain like, say, Bitcoin or Ethereum; they have separate instances for each consortium, and each instance is capable of those speeds.) So even when you have just one DAPP from one consortium that wants to connect Corda to Hyperledger and use 2,000 tps, you would need 10 instances of a 200 tps interoperability blockchain, each with its own validators (maybe 100–200 nodes each). So one DAPP would need to cover the costs of 10 instances of the blockchain and the running costs of 1,000–2,000 nodes… and this is just one DAPP connecting one instance each of two permissioned blockchains, which are still in their early stages. Other blockchains such as Red Belly Blockchain can achieve 440,000 tps, and this will surely increase as the technology matures. There is also the added complexity of aggregating the results / coordinating between the different instances of the blockchain. Then there are the environmental concerns: the power required for all of these instances / nodes is not sustainable.

https://preview.redd.it/yz2wvnhgxel31.png?width=1070&format=png&auto=webp&s=e6cb66e362b18e9924245a6a99e0eac4c9083308
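The instance arithmetic in the paragraph above, spelled out:

```python
import math

# Spelling out the capacity arithmetic above.
dapp_tps = 2_000                 # throughput one DAPP wants end to end
interop_tps = 200                # ceiling of a single interoperability chain
validators_per_instance = 100    # lower bound of the 100-200 range above

instances = math.ceil(dapp_tps / interop_tps)
nodes = instances * validators_per_instance
print(f"{instances} interop instances, ~{nodes:,}-{nodes * 2:,} validator nodes to fund")
# -> 10 interop instances, ~1,000-2,000 validator nodes to fund
```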
It's not just the transactions per second of the blockchain, either; it's the latency of all the added consensus rounds along the path to the destination, and not knowing whether the security of each of the hops is sufficient and can be trusted. To see examples of how this potential issue, as well as others, affects Cosmos, you can see my article here. I recommend also reading a blog by the CEO of Quant, Gilbert Verdian, which explains how Overledger differs here, as well as the detail in the whitepaper here.

https://preview.redd.it/2cwj4k7hxel31.png?width=1169&format=png&auto=webp&s=d6fc49086f944089cef7ffa1dfc9d284107ad2e3

Overledger’s approach

In 1973 Vint Cerf, together with Bob Kahn, invented the protocol that rules them all: TCP/IP. Most people have never heard of it. But it describes the fundamental architecture of the internet, and it made possible Wi-Fi, Ethernet, LANs, the World Wide Web, e-mail, FTP, 3G/4G — as well as all of the inventions built upon those inventions.
Wired: So from the beginning, people, including yourself, had a vision of where the internet was going to go. Are you surprised, though, that at this point the IP protocol seems to beat almost anything it comes up against?

Cerf: I'm not surprised at all because we designed it to do that. This was very conscious. Something we did right at the very beginning, when we were writing the specifications, we wanted to make this a future-proof protocol. And so the tactic that we used to achieve that was to say that the protocol did not know how — the packets of the internet protocol layer didn't know how they were being carried. And they didn't care whether it was a satellite link or mobile radio link or an optical fiber or something else.

We were very, very careful to isolate that protocol layer from any detailed knowledge of how it was being carried. Plainly, the software had to know how to inject it into a radio link, or inject it into an optical fiber, or inject it into a satellite connection. But the basic protocol didn't know how that worked.

And the other thing that we did was to make sure that the network didn't know what the packets had in them. We didn't encrypt them to prevent it from knowing — we just didn't make it have to know anything. It's just a bag of bits as far as the net was concerned.

We were very successful in these two design features, because every time a new kind of communications technology came along, like frame relay or asynchronous transfer mode or passive optical networking or mobile radio, all of these different ways of communicating could carry internet packets.

We would hear people saying, 'The internet will be replaced by X25,' or 'The internet will be replaced by frame relay,' or 'The internet will be replaced by APM,' or 'The internet will be replaced by add-and-drop multiplexers.'

Of course, the answer is, 'No, it won't.' It just runs on top of everything. And that was by design. I'm actually very proud of the fact that we thought of that and carefully designed that capability into the system.
This is the approach Quant have taken with their blockchain OS, Overledger, to solve blockchain interoperability. Other interoperability platforms try to achieve interoperability at the transaction layer by connecting two blockchains via another blockchain; these will ultimately be made redundant once faster methods are released. Overledger is designed to be future-proof by isolating the layers: it doesn't matter whether it's a permissioned blockchain, permissionless, a DAG, a legacy network, PoW, PoS etc, because it abstracts the transaction layer from the messaging layer and runs on top of blockchains. Just as the internet wasn't replaced by X25, frame relay, APM etc, Overledger is designed to be future-proof because it just runs on top of the blockchains rather than being a blockchain itself. So if a new blockchain technology comes out that is capable of 100,000 tps, it can easily be integrated, as Overledger just runs on top of it.
Likewise, just as protocols such as HTTPS and SSH emerged for the internet, equivalents will emerge for blockchains, such as ZK-SNARKs and other privacy implementations, as well as other new features; all will be compatible with Overledger, as it just sits on top rather than forcing its own implementation on everyone.
It doesn't require blockchains to fork their code to become compatible, and it doesn't add the overhead of another blockchain with another consensus mechanism (most likely several, as a transaction has to pass through many hops). All of that adds a lot of latency and restrictions which aren't needed. The developer can just choose which blockchains they want to connect and use the consensus mechanisms of those blockchains, rather than being forced to use another one.
Overledger can provide truly internet scale to meet whatever the demand may be; whether that means connecting multiple Red Belly blockchains together at 440,000 tps, it doesn't matter, because Overledger doesn't add its own consensus mechanism and uses proven internet-scale technology such as Kubernetes, where each task is split into a self-contained container and scaled out by deploying more containers to meet demand. Kubernetes grew out of Borg, the system Google uses to run its search engine, scaling up and down billions of containers every week.
Due to this being more of a summary, I strongly recommend you read this article which goes into detail about the different layers in Overledger.

https://preview.redd.it/1lpt98cixel31.png?width=1126&format=png&auto=webp&s=3928cf66cfe25bfce7dc84be7b6db670ac952ccf

But how does it provide the security of a blockchain if it doesn’t add its own blockchain?

This is often misunderstood. Overledger is not a blockchain; however, it still uses blockchains for security, immutability, traceability etc. Rather than forcing people to use its own blockchain, it utilises the source and destination blockchains instead. The key thing to understand is its patented TrustTag technology, which was made freely available to anyone with the Overledger SDK.
Please see this article which explains TrustTag in detail with examples showing how hashing / digital signatures work etc
A quick overview: if I want to send data from one blockchain to another, the Overledger SDK, using TrustTag, will put the data through a hashing algorithm. The hash is then included in a digital signature as part of the transaction, which is signed by the user's private key, validated through normal consensus, and stored as metadata on the source blockchain. The message is then sent to the MAPP off chain. The MAPP periodically scans the blockchains, puts the received message through the hashing algorithm and compares the hash to the one stored as metadata on the blockchain. This ensures that the message hasn't been modified in transit. The message is encrypted and only the hash is stored on chain, so it is completely private; it provides immutability, as it was signed by the user's private key, which only they have; and it is stored on the blockchain for high availability and security, so that it can't be modified, with the ability to refer back to it at any point in time.
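As a stripped-down sketch of that flow (illustrative only; the canonical-JSON step below is my own assumption, and real TrustTag additionally signs the digest with the sender's private key):

```python
import hashlib
import hmac
import json

# Illustrative only: canonical JSON is an assumption; real TrustTag also signs the digest.
def trust_tag(message: dict) -> str:
    canonical = json.dumps(message, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

message = {"from": "alice", "to": "bob", "payload": "hello"}
on_chain_hash = trust_tag(message)      # this digest is what lands on the source chain

# ... the message itself travels off chain to the MAPP ...

received = {"from": "alice", "to": "bob", "payload": "hello"}
assert hmac.compare_digest(trust_tag(received), on_chain_hash)  # unmodified in transit
```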
Overledger is a very secure platform; the team have a very strong security background, such as Gilbert, who was Chief Information Security Officer for Vocalink (Bank of England), managing £6 trillion of payments every year and classified as national critical security (the highest level you can get). But ultimately you don't need to trust Overledger. Transactions are signed and encrypted client side, so Overledger has no way of seeing the contents. It can't modify any transaction, as the digital signature, which includes a hash of the transaction, would no longer match and the transaction would be rejected. Transaction security isn't reduced, as the transaction is signed at source using however many nodes the source blockchain has, rather than a smaller number of nodes on an interoperability blockchain in the middle.

Patents

The core code of Overledger is closed source and patented; one of the recent patents can be seen here, along with TrustTag, and further ones are being filed. The Overledger SDK is open source and is currently available in Java and JavaScript, with plans to support Python and Ruby in the near future. Java and JavaScript are the most popular programming languages used today.
The blockchain connectors are also open source, which allows the community to create connectors for their favourite blockchains so those chains can benefit from blockchain interoperability, making them available to all enterprises / developers currently utilising Overledger. Creating a connector currently takes around a week, and so far connectors have been added based upon client demand.

Multi Chain Applications (MAPPs)

Multi-chain applications (MAPPs) enable an application to use multiple blockchains and interoperate between them. Treaty Contracts enable a developer to build a MAPP and then change the underlying blockchain it uses with a quick change of a couple of lines of code. This is vital for enterprises, as it is still early days in blockchain and we don't know which blockchains will be the best in the future. Overledger integrates easily into existing applications via the Overledger SDK by adding just 3 lines of code; there is no need to completely rewrite the application, as with the majority of other projects, and all existing Java / JavaScript apps on Windows / mobile app stores / business systems can integrate with Overledger with minimal changes, in as little as 8 minutes.

Treaty Contracts

With Treaty Contracts, Overledger will allow you to use popular programming languages such as Java to create a smart contract in Overledger that interacts with all of the connected blockchains, even providing smart contract functionality on blockchains that don't support it natively, such as Bitcoin. This means developers don't have to create all the smart contracts on each blockchain in all the different programming languages; instead they create them once in Overledger, using languages such as Java that are widely used today. If they need to use a different blockchain, it can be as easy as changing a line of code rather than completely rewriting the smart contracts.
Overledger isn't a blockchain though, so how can it be trusted with the smart contract? A hash of the smart contract is published on any blockchain the MAPP developer requires, and when the smart contract is called, it is run through a hashing function to check that it matches the hash value stored on the blockchain, ensuring that it has not been modified.
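A hypothetical sketch of that verify-before-run check (Overledger's actual runtime is closed source, so every name below is invented for illustration):

```python
import hashlib

# Every name below is invented for illustration; this only shows the integrity check.
def run_treaty_contract(source_code: str, anchored_hash: str) -> dict:
    digest = hashlib.sha256(source_code.encode()).hexdigest()
    if digest != anchored_hash:
        raise RuntimeError("contract code no longer matches the hash anchored on chain")
    scope: dict = {}
    exec(source_code, {"__builtins__": {}}, scope)  # run only after the check passes
    return scope

code = "result = 1 + 1"
anchored = hashlib.sha256(code.encode()).hexdigest()  # imagine this value on a blockchain
print(run_treaty_contract(code, anchored))            # {'result': 2}
```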
Running smart contracts off chain also increases scalability enormously. With a blockchain, all nodes have to run the smart contract, and executions happen one after another rather than in parallel. Not only do you avoid the cost of running the code on every single node, but contracts can also run in parallel with other executing smart contracts.
You can read more about Treaty Contracts here

The different versions of Overledger

Enterprise version

The current live version is the Enterprise version, as that is where most blockchain adoption is taking place, with permissioned blockchains being preferred until permissionless blockchains resolve their scalability, privacy and regulatory issues. Please see this article, which goes into more detail about enterprise blockchain / adoption. The Enterprise version connects to permissioned blockchains and includes additional features / support suited to enterprises.

Community version

The community version is due to be released later this year and will allow developers to benefit from creating MAPPs across permissionless blockchains. Developers will be able to publish their MAPPs on the MAPP Store to create additional revenue streams.

Where does Overledger run from? Is it Centralised?

Overledger can run from anywhere. The community version will have instances across multiple public clouds, while enterprises / developers may prefer to host the infrastructure themselves within a consortium, which they can do and are doing. For example, SIA is the leading private financial network provider in Europe; it provides a dedicated high-speed network which connects all the major banks, central banks, trading venues etc. SIA hosts Overledger within its private network so that all of those clients can access it within the confines of their heavily regulated, secure, fast network. AUCloud / UKCloud host Overledger in their environment to offer it as a service to their clients, which consist of governments and critical national infrastructure providers.
As for the blockchain nodes that interact with Overledger, the choice is entirely up to the developer. Each member within a consortium may choose to host a node, some developers may prefer to use third-party hosting providers such as Infura, or Quant can host them; it's entirely their choice.
Overledger allows for higher levels of decentralisation by storing the output across multiple blockchains, so you benefit not just from the decentralisation of one blockchain but from the combination of all of them. Ultimately, though, 'decentralisation' is thrown around too much without many people actually understanding what it means. It's impossible to have complete decentralisation: when you sign a transaction to be added to a blockchain, you still connect through a single ISP and a single router, and the input into a transaction is done through a single piece of software. What matters for decentralisation is wherever trust is involved. As I have mentioned before, you don't need to trust the OS: it just provides instructions on how to interact with the blockchains, while the end user signs the transactions / encrypts at client side. Nothing can be seen or modified by the OS. Even if a transaction somehow did get modified, it would be rejected during consensus, as the hash / digital signature wouldn't match at the destination blockchain. Where the transaction actually gets put onto the blockchain is where decentralisation matters, because that is what needs to be trusted and where consensus is reached, and Overledger enables this to be written across multiple blockchains at the same time.

The Team

The team are very well connected, with a wealth of experience in very senior roles at global enterprises; I will include a few examples below. Gilbert Verdian, the CEO, was head of security for the payment infrastructure of the Bank of England through his CISO role at Vocalink (Mastercard), managing £6 trillion every year. This is treated by the government as critical national infrastructure, the highest level of criticality, because it is so fundamental to the security of the country. So they have experience of what it takes to run a secure financial infrastructure and meet the requirements of regulators. Gilbert was Director for Cybersecurity at PwC and worked in security for HSBC and Ernst & Young, as well as holding various government roles such as CISO for Australian NSW Health and Head of Security at the UK Ministry of Justice and HM Treasury, in addition to serving on committees for the European Commission, US Federal Reserve and the Bank of England.
Cecilia Harvey is the Chief Operating Officer. She was previously a Director at HSBC in Global Banking and Markets, and before that a Director at Vocalink. Cecilia was also Chief Operating Officer at Citi for Markets and Securities Services Technology, and has worked for Barclays, Accenture, IBM and Morgan Stanley.
Vijay Verma is the Overledger platform lead, with over 15 years of developer experience in technologies like Java, Scala, blockchain and enterprise technology solutions. Over the course of his career, he has worked for a number of prestigious organisations including J&J, Deutsche Bank, HSBC, BNP Paribas, UBS, HMRC and Network Rail.
Guy Dietrich, the managing director of Rockefeller Capital (which manages $19 billion in assets), has joined the board of Quant Network, and recently personally attended meetings with the Financial Conduct Authority (FCA) together with Gilbert.

https://preview.redd.it/1x25xg78efl31.png?width=566&format=png&auto=webp&s=abea981ff40355eed2d0e3be1ca414c5b1b8573c
There are also advisors such as Paolo Tasca, the founder and Executive Director of the Centre for Blockchain Technologies (UCL CBT) at University College London, and Chris Adelsbach, Managing Director at Techstars, the worldwide network that helps entrepreneurs succeed. Techstars has partners such as Amazon, Barclays, Boeing, Ford, Google, Honda, IBM, Microsoft, PWC, Sony, Target, Total, Verizon, Western Union etc.
Due to client demand they are expanding to the US to set up an office of similar size, where board members such as Guy Dietrich will be extremely valuable in assisting with the expansion.
https://twitter.com/gverdian/status/1151549142235340800
The most exciting part about the project, though, is just how much adoption there has been of the platform: from huge global enterprises, governments and cloud providers, they are on track for revenue of $10 million in their first year. I will go through these in the next article, followed by a further article explaining how the token and the Treasury work.
You can also find out more info about Quant at the following:
Part One — Blockchain Fundamentals
Part Two — The Layers Of Overledger
Part Three — TrustTag and the Tokenisation of data
Part Four — Features Overledger provides to MAPPs
Part Five — Creating the Standards for Interoperability
Part Six — The Team behind Overledger and Partners
Part Seven — The QNT Token
Part Eight — Enabling Enterprise Mass Adoption
Quant Network Enabling Mass Adoption of Blockchain at a Rapid Pace
Quant Network Partner with SIA, A Game Changer for Mass Blockchain Adoption by Financial Institutions
submitted by xSeq22x to QuantNetwork [link] [comments]

What is Quant Networks Blockchain Operating System, Overledger? And why are Enterprises adopting it at mass scale?

What is Quant Networks Blockchain Operating System, Overledger? And why are Enterprises adopting it at mass scale?
Overledger is the world’s first blockchain operating system (OS) that not only inter-connects blockchains but also existing enterprise platforms, applications and networks to blockchain and facilitates the creation of internet scale multi-chain applications otherwise known as mApps.
In less than 10 months since launching Overledger they have provided interoperability with the full range of DLT technologies from all the leading Enterprise Permissioned blockchains such as Hyperledger, R3’s Corda, JP Morgan’s Quorum, permissioned variants of Ethereum and Ripple (XRPL) as well as the leading Public Permissionless blockchains / DAGs such as Bitcoin, Stellar, Ethereum, IOTA and EOS as well as the most recent blockchain to get added Binance Chain. In addition, Overledger also connects to Existing Networks / Off Chain / Oracle functionality and it does all of this in a way that is hugely scalable, without imposing restrictions / requiring blockchains to fork their code and can easily integrate into existing applications / networks by just adding 3 lines of code.

https://preview.redd.it/30jclqe3wel31.png?width=1920&format=png&auto=webp&s=2bcce5d296c3a287dccdd28b72877ca9e03a5f31

What is a blockchain Operating system?

You will be familiar with Operating systems such as Microsoft Windows, Apple Mac OS, Google’s Android etc but these are all Hardware based Operating Systems. Hardware based Operating Systems provide a platform to build and use applications that abstracts all of the complexities involved with integrating with all the hardware resources such as CPU, Memory, Storage, Mouse, Keyboard, Video etc so software can easily integrate with it. It provides interoperability between the Hardware devices and Software.
Overledger is a Blockchain Operating System, it provides a platform to build and use applications that abstracts all of the complexities involved with integrating with all the different blockchains, different OP_Codes being used, messaging formats etc as well as connecting to existing non-blockchain networks. It provides interoperability between Blockchains, Existing Networks and Software / MAPPs

How is Overledger different to other interoperability projects?

Other projects are trying to achieve interoperability by adding another blockchain on top of existing blockchains. This adds a lot of overhead, complexity, and technical risk. There are a few variants but essentially they either need to create custom connectors for each connected blockchain and / or require connected chains to fork their code to enable interoperability. An example of the process can be seen below:
User sends transaction to a multi sig contract on Blockchain A, wait for consensus to be reached on Blockchain A
A custom connector consisting of Off Chain Relay Nodes are monitoring transactions sent to the smart contract on Blockchain A. Once they see the transaction, they then sign a transaction on the Interoperability blockchain as proof the event has happened on Blockchain A.
Wait for consensus to be reached on the Interoperability Blockchain.
The DAPP running on the Interoperability Blockchain is then updated with the info about the transaction occurring on Blockchain A and then signs a transaction on the Interoperability blockchain to a multi sig contract on the Interoperability Blockchain.
Wait for consensus to be reached on the interoperability Blockchain.
A different custom connector consisting of Off Chain Relay Nodes are monitoring transactions sent to the Smart Contract on the Interoperability Blockchain which are destined for Blockchain B. Once they see the transaction, they sign a transaction on Blockchain B. Wait for consensus to be reached on Blockchain B.
https://preview.redd.it/2apm3pb5wel31.png?width=1558&format=png&auto=webp&s=7027514706d7b12690b1be8f4f4af7cfc9c43354
Other solutions require every connecting blockchain to fork their code and implement their Interoperability protocol. This means the same type of connector can be used instead of a custom one for every blockchain however every connected blockchain has to fork their code to implement the protocol. This enforces a lot of restrictions on what the connected blockchains can implement going forward.

https://preview.redd.it/4axzxx57wel31.png?width=1561&format=png&auto=webp&s=a8c3de8468ef9b67bc1db75cffbef81ef8c0aa70
Some problems with these methods:
  • They add a lot of Overhead / Latency. Rather than just having the consensus of Blockchain A and B, you add the consensus mechanism of the Interoperability Blockchain as well.
  • Decentralisation / transaction security is reduced. If Blockchain A and Blockchain B each have 1,000 nodes validating transactions, yet the Interoperability Blockchain only has 100 nodes then you have reduced the security of the transaction from being validated by 1000 to validated by 100.
  • Security of the Interoperability Blockchain must be greater than the sum of all transactions going through it. JP Morgan transfer $6 Trillion every day, if they move that onto blockchain and need interoperability between two Permissioned blockchains that have to connect via a public Interoperability blockchain, then it would always have to be more costly to attack the blockchain than the value from stealing the funds transacted through the blockchain.
  • Imposes a lot of limitations on connected blockchains to fork their code which may mean they have to drop some existing functionality as well as prevent them from adding certain features in the future.
  • Creates a single point of failure — If the Interoperability blockchain or connector has an issue then this affects each connected blockchain.
  • It doesn’t scale and acts as a bottleneck. Not only does building complex custom connectors not scale but the Interoperability blockchain that they are forcing all transactions to go through has to be faster than the combined throughput of connected blockchains. These Interoperability blockchains have limited tps, with the most being around 200 and is a trade off between performance and decentralisation.

But some Interoperability blockchains say they are infinitely scalable?

If the interoperability blockchain is limited to say 200 tps then the idea is to just have multiple instances of the blockchain and run them in parallel, so you benefit from the aggregated tps, but just how feasible is that? Lets say you want to connect Corda (capable of 2000+ tps) to Hyperledger (capable of up to 20,000 tps with recent upgrade). (Permissioned blockchains such as Hyperledger and Corda aren’t one big blockchain like say Bitcoin or Ethereum, they have separate instances for each consortium and each is capable of those speeds). So even when you have just 1 DAPP from one consortium that wants to connect Corda to Hyperledger and use 2000 tps for their DAPP, you would need 100 instances of the Interoperability blockchain, each with their own validators (which maybe 100–200 nodes each). So, 1 DAPP would need to cover the costs for 100 instances of the blockchain and running costs for 10,000 nodes…This is just one DAPP connected to one instance of a two permissioned blockchains, which are still in the early stages. Other blockchains such as Red Belly Blockchain can achieve 440,000 tps, and this will surely increase as the technology matures. There is also the added complexity of then aggregating the results / co-coordinating between the different instances of the blockchain. Then there are the environmental concerns, the power required for all of these instances / nodes is not sustainable.

https://preview.redd.it/myjx8t29wel31.png?width=1070&format=png&auto=webp&s=550ac862c3c5b46df8ed42cf37282cad0a960819
It's not just the transactions per second of the blockchain, either; it's the latency of all these added consensus rounds along the path to the destination, and not knowing whether the security of each of the hops is sufficient and can be trusted. To see examples of how this potential issue, as well as others, affects Cosmos, you can see my article here. I also recommend reading a blog post by the CEO of Quant, Gilbert Verdian, which explains how Overledger differs here, as well as the detail in the whitepaper here.

https://preview.redd.it/m9036lzfwel31.png?width=1169&format=png&auto=webp&s=50e54198a97106b3921f79ca928f7e808a5529d7

Overledger’s approach

In 1973 Vint Cerf, together with Bob Kahn, designed the protocol that rules them all: TCP/IP. Most people have never heard of it. But it describes the fundamental architecture of the internet, and it made possible Wi-Fi, Ethernet, LANs, the World Wide Web, e-mail, FTP, 3G/4G, as well as all of the inventions built upon those inventions.
Wired: So from the beginning, people, including yourself, had a vision of where the internet was going to go. Are you surprised, though, that at this point the IP protocol seems to beat almost anything it comes up against?

Cerf: I'm not surprised at all, because we designed it to do that. This was very conscious. Something we did right at the very beginning, when we were writing the specifications, we wanted to make this a future-proof protocol. And so the tactic that we used to achieve that was to say that the protocol did not know how — the packets of the internet protocol layer didn't know how they were being carried. And they didn't care whether it was a satellite link or mobile radio link or an optical fiber or something else.

We were very, very careful to isolate that protocol layer from any detailed knowledge of how it was being carried. Plainly, the software had to know how to inject it into a radio link, or inject it into an optical fiber, or inject it into a satellite connection. But the basic protocol didn't know how that worked.

And the other thing that we did was to make sure that the network didn't know what the packets had in them. We didn't encrypt them to prevent it from knowing — we just didn't make it have to know anything. It's just a bag of bits as far as the net was concerned.

We were very successful in these two design features, because every time a new kind of communications technology came along, like frame relay or asynchronous transfer mode or passive optical networking or mobile radio, all of these different ways of communicating could carry internet packets.

We would hear people saying, 'The internet will be replaced by X.25,' or 'The internet will be replaced by frame relay,' or 'The internet will be replaced by ATM,' or 'The internet will be replaced by add-and-drop multiplexers.'

Of course, the answer is, 'No, it won't.' It just runs on top of everything. And that was by design. I'm actually very proud of the fact that we thought of that and carefully designed that capability into the system.
This is the approach Quant have taken with their Blockchain OS, Overledger, to solve blockchain interoperability. Other interoperability platforms try to achieve interoperability at the transaction layer by connecting two blockchains via another blockchain; those will ultimately be made redundant once faster methods are released. Overledger is designed to be future-proof by isolating the layers: it doesn't matter whether a chain is permissioned, permissionless, a DAG, a legacy network, PoW, PoS etc., because Overledger abstracts the transaction layer from the messaging layer and runs on top of the blockchains. Just as the internet wasn't replaced by X.25, frame relay, ATM etc., Overledger is future-proof because it simply runs on top of the blockchains rather than being a blockchain itself. So if a new blockchain technology comes out that is capable of 100,000 tps, it can easily be integrated, because Overledger just runs on top of it.
Likewise, just as protocols such as HTTPS and SSH emerged on top of TCP/IP, equivalents will emerge for blockchains (ZK-SNARKs and other privacy implementations, as well as other features), and all will be compatible with Overledger, since it just sits on top rather than forcing its own implementation on everyone.
It doesn't require blockchains to fork their code to become compatible, and it doesn't add the overhead of another blockchain with another consensus mechanism (most likely several, as a transaction would have to pass through many hops). All of that adds a lot of latency and restrictions that aren't needed. The developer can simply choose which blockchains they want to connect and use the consensus mechanisms of those blockchains, rather than being forced to use another one.
Overledger can provide true internet scale to meet whatever the demand may be. Whether that means connecting multiple Red Belly blockchains at 440,000 tps each doesn't matter, because Overledger doesn't add its own consensus mechanism and uses proven internet-scale technology such as Kubernetes, in which each task is split into a self-contained container and scaled out by deploying more containers to meet demand. Kubernetes grew out of the internal system Google uses to launch billions of containers every week.
As this is more of a summary, I strongly recommend reading this article, which goes into detail about the different layers of Overledger.

https://preview.redd.it/6x7tjq9jwel31.png?width=1126&format=png&auto=webp&s=52ac5b9ebb45908ef6070d2eed6d107d380da1df

But how does it provide the security of a blockchain if it doesn’t add its own blockchain?

This is often misunderstood. Overledger is not a blockchain, but it still uses blockchains for security, immutability, traceability and so on; rather than forcing people onto its own blockchain, it utilises the source and destination blockchains instead. The key thing to understand is its patented TrustTag technology, which is made freely available to anyone using the Overledger SDK.
Please see this article, which explains TrustTag in detail, with examples showing how the hashing and digital signatures work.
A quick overview: if I want to send data from one blockchain to another, the Overledger SDK (using TrustTag) puts the data through a hashing algorithm. The hash is then included in the digital signature as part of the transaction, which is signed with the user's private key, validated through the source chain's normal consensus, and stored as metadata on the source blockchain. The message itself is sent to the MAPP off chain. The MAPP periodically scans the blockchains, puts each received message through the same hashing algorithm and compares the result to the hash stored as metadata on the blockchain. This ensures the message hasn't been modified in transit. Because the message is encrypted and only the hash is stored on chain, it stays completely private; and because it was signed with the user's private key (which only they hold) and recorded on a blockchain, it is immutable, highly available, and can be referred back to at any point in time.
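Here is a minimal sketch of that hash-and-sign pattern using only standard JDK crypto. To be clear, this is not the TrustTag or Overledger SDK code (which is closed source); it merely illustrates the general mechanism the paragraph describes:

```java
import java.nio.charset.StandardCharsets;
import java.security.*;
import java.util.Arrays;

// Sketch of the hash-and-sign pattern described above. NOT the TrustTag /
// Overledger SDK implementation; standard JDK crypto is used for illustration.
public class TrustTagSketch {
    public static void main(String[] args) throws Exception {
        byte[] message = "cross-chain payload".getBytes(StandardCharsets.UTF_8);

        // 1. Sender hashes the message; only this digest goes on chain.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(message);

        // 2. Sender signs the digest with their private key.
        KeyPair keys = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        Signature signer = Signature.getInstance("Ed25519");
        signer.initSign(keys.getPrivate());
        signer.update(digest);
        byte[] signature = signer.sign(); // stored as transaction metadata

        // 3. Receiver (the MAPP) re-hashes the message received off chain and
        //    checks it against the on-chain digest and signature.
        byte[] recomputed = MessageDigest.getInstance("SHA-256").digest(message);
        Signature verifier = Signature.getInstance("Ed25519");
        verifier.initVerify(keys.getPublic());
        verifier.update(recomputed);
        boolean untampered = Arrays.equals(digest, recomputed)
                && verifier.verify(signature);
        System.out.println("Message intact and authentic: " + untampered);
    }
}
```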
Overledger is a very secure platform, built by a team with a very strong security background (Gilbert, for example, was chief information security officer for Vocalink (Bank of England), managing £6 trillion of payments every year under a "national critical" security classification, the highest level there is). But ultimately you don't need to trust Overledger. Transactions are signed and encrypted client-side, so Overledger has no way of seeing their contents. It can't modify any transaction either, because the digital signature, which includes a hash of the transaction, would no longer match and the transaction would be rejected. And transaction security isn't reduced: a transaction is signed at source and validated by however many nodes the source blockchain has, rather than by the smaller node count of an interoperability blockchain in the middle.

Patents

The core code of Overledger is closed source and patented; one of the recent patents can be seen here, along with TrustTag, and further patents are being filed. The Overledger SDK is open source and is currently available in Java and JavaScript, with plans to support Python and Ruby in the near future. Java and JavaScript are among the most popular programming languages in use today.
The blockchain connectors are also open source, which allows the community to create connectors for their favourite blockchains so those chains can benefit from blockchain interoperability, making them available to all enterprises and developers using Overledger. A new connector currently takes around a week to implement, and so far connectors have been added based on client demand.

Multi Chain Applications (MAPPs)

Multi Chain Applications (MAPPs) enable an application to use multiple blockchains and interoperate between them. Treaty Contracts enable a developer to build a MAPP and later change the underlying blockchain it uses with a quick change of a couple of lines of code. This is vital for enterprises: it is still early days for blockchain, and we don't know which blockchains will turn out to be the best. Overledger integrates into existing applications through the Overledger SDK by adding as little as three lines of code (a hedged sketch of this pattern follows). Developers don't need to completely rewrite their application, as they do with the majority of other projects, and existing Java / JavaScript apps on Windows, mobile app stores and business systems can integrate with Overledger with minimal changes, reportedly in as little as eight minutes.
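As a purely hypothetical illustration of what a few-lines integration can look like, here is a sketch with invented placeholder types. The real Overledger SDK's class and method names will differ, so treat this as the shape of the integration, not the API:

```java
// Hypothetical sketch only: these types and method names are placeholders
// invented for illustration; they are NOT the real Overledger SDK API.
public class MappIntegrationSketch {

    // Stand-in for whatever client object the real SDK exposes.
    interface LedgerClient {
        String prepareTransaction(String chain, String to, String amount);
        String signAndSend(String preparedTx);
    }

    static class FakeClient implements LedgerClient {
        public String prepareTransaction(String chain, String to, String amount) {
            return chain + ":" + to + ":" + amount; // placeholder behaviour
        }
        public String signAndSend(String preparedTx) {
            return "txid-for-" + preparedTx;        // placeholder behaviour
        }
    }

    public static void main(String[] args) {
        LedgerClient client = new FakeClient();
        // The "few lines of code" shape the article describes: the chain name
        // is just a parameter, so switching blockchains is a one-line change.
        String tx = client.prepareTransaction("Ethereum", "0xRecipient", "1.0");
        System.out.println(client.signAndSend(tx));
    }
}
```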

Treaty Contracts

What Overledger allows with Treaty Contracts is to use a popular programming language such as Java to create a smart contract in Overledger that interacts with all of the connected blockchains, even providing smart-contract functionality to blockchains that don't natively support it, such as Bitcoin. Developers don't have to recreate their smart contracts on each blockchain in all the different programming languages; instead they write them once in Overledger, in a widely used language such as Java. If they need to move to a different blockchain, it can be as easy as changing a line of code rather than completely rewriting the smart contracts.
Overledger isn't a blockchain, though, so how can it be trusted with the smart contract? A hash of the smart contract is published on whichever blockchains the MAPP developer requires, and whenever the contract is called, its code is run through a hashing function to check that it matches the hash stored on the blockchain, ensuring it has not been modified (a minimal sketch of this check follows).
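A minimal sketch of that verification step (not Quant's implementation; the contract bytes and the on-chain hash below are stand-ins invented for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Sketch (not Quant's code) of verifying off-chain contract code against a
// hash that was previously published on a blockchain.
public class TreatyContractCheck {

    // Assume this digest was stored on the MAPP developer's chosen blockchain
    // when the contract was published (hypothetical value for illustration).
    static final String ON_CHAIN_HASH = sha256Hex("contract bytecode v1");

    public static void main(String[] args) {
        String codeAboutToRun = "contract bytecode v1";
        if (!sha256Hex(codeAboutToRun).equals(ON_CHAIN_HASH)) {
            throw new IllegalStateException("Contract modified - refusing to run");
        }
        System.out.println("Hash matches on-chain record - executing contract");
    }

    static String sha256Hex(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                                    .digest(s.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(d);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```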
Running the smart contract off chain also increases scalability enormously. On a blockchain, every node has to execute the smart contract redundantly. Off chain, you not only avoid running the code on every single node, you can also execute contracts in parallel with other running contracts.
You can read more about Treaty Contracts here.

The different versions of Overledger

Enterprise version

The current live version is the Enterprise version, because that is where most blockchain adoption is taking place: permissioned blockchains are preferred until permissionless blockchains resolve their scalability, privacy and regulatory issues. Please see this article, which goes into more detail about enterprise blockchain and adoption. The Enterprise version connects to permissioned blockchains and adds features and support suited to enterprises.

Community version

The Community version is due to be released later this year and will allow developers to benefit from creating MAPPs across permissionless blockchains. Developers will be able to publish their MAPPs on the MAPP Store, creating additional revenue streams.

Where does Overledger run from? Is it Centralised?

Overledger can run from anywhere. The Community version will have instances across multiple public clouds, while enterprises and developers may prefer to host the infrastructure themselves within a consortium, which they can do and are doing. For example, SIA is the leading private financial network provider in Europe, providing a dedicated high-speed network that connects all the major banks, central banks and trading venues. SIA hosts Overledger within its private network so that all of those clients can access it within the confines of their heavily regulated, secure, fast network. AUCloud and UKCloud host Overledger in their environments to offer it as a service to their clients, which include governments and critical national infrastructure.
For the blockchain nodes that interact with Overledger, the choice is entirely up to the developer: each member of a consortium may choose to host a node, some developers may prefer third-party hosting providers such as Infura, or Quant can host nodes for them.
Overledger also allows for higher levels of decentralisation by storing the output across multiple blockchains, so you benefit not just from the decentralisation of one blockchain but from the combination of all of them (a toy illustration follows). Ultimately, though, "decentralisation" is thrown around too much, often without a clear understanding of what it means. Complete decentralisation is impossible: when you sign a transaction to be added to a blockchain, you still connect through a single ISP and a single router, and the transaction input is entered through a single piece of software. What matters is that decentralisation exists wherever trust is involved. As I have mentioned before, you don't need to trust the OS: it just provides instructions on how to interact with the blockchains, while the end user signs and encrypts transactions client-side. Nothing can be seen or modified by the OS, and even if a transaction somehow were modified, it would be rejected at consensus because the hash and digital signature wouldn't match at the destination blockchain. Where the transaction actually gets written onto a blockchain is where decentralisation matters, because that is what must be trusted and where consensus is reached, and Overledger enables the output to be written across multiple blockchains at the same time.
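As a toy illustration (my own, not Quant's code) of why writing the same digest to several chains raises the bar for tampering, a reader of the record could require agreement across the chains before trusting it:

```java
import java.util.Map;

// Toy illustration (not Quant's implementation): accept a record only if the
// digest stored on a majority of the chosen blockchains agrees.
public class MultiChainRecord {
    public static void main(String[] args) {
        String expected = "digest-abc123"; // what the client computed locally

        // Digests as read back from three independent chains (hypothetical).
        Map<String, String> onChain = Map.of(
                "Bitcoin",  "digest-abc123",
                "Ethereum", "digest-abc123",
                "Ripple",   "digest-TAMPERED");

        long agreeing = onChain.values().stream()
                               .filter(expected::equals)
                               .count();
        boolean accepted = agreeing > onChain.size() / 2;
        System.out.println("Agreeing chains: " + agreeing + "/" + onChain.size()
                + " -> accepted: " + accepted);
    }
}
```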

The Team

The team is very well connected, with a wealth of experience in very senior roles at global enterprises; a few examples follow. Gilbert Verdian, the CEO, was head of security for the Bank of England's payment infrastructure through his CISO role at Vocalink (Mastercard), managing £6 trillion every year. That infrastructure is treated by the government as critical national infrastructure, the highest level of criticality, because it is so fundamental to the security of the country, so the team knows what it takes to run a secure financial infrastructure and meet the requirements of regulators. Gilbert has also been Director for Cybersecurity at PwC and worked in security for HSBC and Ernst & Young, as well as holding various government roles, including CISO for NSW Health in Australia and head of security at the UK Ministry of Justice and HM Treasury, in addition to sitting on committees for the European Commission, the US Federal Reserve and the Bank of England.
Cecilia Harvey is the Chief Operating Officer. She was previously a Director at HSBC in Global Banking and Markets, and before that a Director at Vocalink. Cecilia was also Chief Operating Officer for Markets and Securities Services Technology at Citi, and has worked for Barclays, Accenture, IBM and Morgan Stanley.
Vijay Verma is the Overledger platform lead, with over 15 years of development experience in technologies such as Java, Scala, blockchain and enterprise technology solutions. Over the course of his career he has worked for a number of prestigious organisations, including J&J, Deutsche Bank, HSBC, BNP Paribas, UBS, HMRC and Network Rail.
Guy Dietrich, managing director of Rockefeller Capital (which manages $19 billion in assets), has joined the board of Quant Network and has recently personally attended meetings with the Financial Conduct Authority (FCA) alongside Gilbert.

https://preview.redd.it/wj5ubgv4efl31.png?width=566&format=png&auto=webp&s=2c0cb650f6aceae3d133beefdac04ba0aeea63f6
There are also advisors such as Paolo Tasca, founder and Executive Director of the Centre for Blockchain Technologies (UCL CBT) at University College London, and Chris Adelsbach, Managing Director at Techstars, the worldwide network that helps entrepreneurs succeed. Techstars has partners such as Amazon, Barclays, Boeing, Ford, Google, Honda, IBM, Microsoft, PwC, Sony, Target, Total, Verizon and Western Union.
Due to client demand, they are expanding to the US to set up a similarly sized office, where board members such as Guy Dietrich will be extremely valuable in assisting with the expansion.
https://preview.redd.it/7zlrragqffl31.png?width=578&format=png&auto=webp&s=36980e86da6d050f086eb2171f679ac1716f97dc
The most exciting part of the project, though, is just how much adoption of the platform there has been: between huge global enterprises, governments and cloud providers, they are on track for revenue of $10 million in their first year. I will go through these in the next article, followed by a further article explaining how the token and treasury work.
You can also find out more info about Quant at the following:
Part One — Blockchain Fundamentals
Part Two — The Layers Of Overledger
Part Three — TrustTag and the Tokenisation of data
Part Four — Features Overledger provides to MAPPs
Part Five — Creating the Standards for Interoperability
Part Six — The Team behind Overledger and Partners
Part Seven — The QNT Token
Part Eight — Enabling Enterprise Mass Adoption
Quant Network Enabling Mass Adoption of Blockchain at a Rapid Pace
Quant Network Partner with SIA, A Game Changer for Mass Blockchain Adoption by Financial Institutions
submitted by xSeq22x to CryptoCurrency [link] [comments]

A new Nano trade exchange was launched a couple of days ago - Here's why anyone who remembers BitGrail should be very afraid

This is mostly a duplicate of my summarised responses to that exchange's original announcement threads here and in /nanotrade.

The owners of that new exchange are welcome to downvote this new posting of mine, but everyone else can make their own decision about whether to upvote this for posterity's benefit, so that my post can be found in months to come (given that Reddit's search engine is really bad at searching comments).

Here's what's dodgy about this exchange, and why anyone who remembers BitGrail should be very afraid:
I emphasise that I'm not saying this is a scam. But I am saying it looks like what a scam website would look like and appears to be operating illegally under UK law:

* The domain NanoTrade.co.uk was registered only two days (one working day) before the site went live - meaning it could only have been tested very briefly (Edit: and could not have been secured against accidentally being taken by another person, which implies an incredible lack of planning)
* The domain was registered by NameCheap, with an obscured registrant
* The site was not pre-announced on /NanoCurrency nor /NanoTrade
* The site was announced first on /nanotrade (6k subscribers), and only announced here on /nanocurrency (44k subscribers) three days later, demonstrating ignorance of the two subreddits' relative sizes
* The announcer Paradise2GE claimed Google and Facebook ads existed pre-launch - but none of you here saw them, or you would have excitedly told us
* No such advertising for the site was noted by me, yet I would have been a key target for it
* Since the domain was not registered, such advertising could not have included, say, a [[email protected]](mailto:[email protected]) mail form, which is incompetent in itself
* Nano Associates Limited was registered with Companies House on 2018-02-28 (as company number 11229688) - to a London mail-forwarding company's central London address (a postcode with 11,000 companies registered at the same address)
* Its sole Director 'Orlando Carugo' has no Google history of being associated with Nano, Cryptocurrency, or finance at all
* Orlando Carugo's profession is listed as 'Sales Director'
* The Google and LinkedIn history of a UK-based 'Orlando Carugo' (a sales professional) can be found, with no reference to cryptocurrency but with references to being willing to work for stock options or commission
* The announcement on /NanoTrade was made by Paradise2GE - an account with a few questions on Bitcoin two years ago, a single comment one year ago, then absolutely nothing until this weekend.
* We know nothing of the overall reputation of the company owners
* The company uses the payment processor https://en.bitcoin.it/wiki/VirWoX which has a daily limit on PayPal withdrawals of 2,500 EUR. The NanoTrade website, however, states that up to 90,000 Nano can be sold per PayPal transaction. I cannot reconcile these figures. If 90,000 Nano were sold via PayPal, when would the seller get their money?
* If the answer is that such payments cannot be made, when were the company planning to tell the sellers?
* On being asked on /Nanotrade, Paradise2GE has avoided answering this question in their dissembling answers
* The company is not registered by the FCA
* The company is not registered by HMRC as a money service
* It is a legal requirement for UK based companies to register as money services.
* It is not a get-out to just deal in USD and not GBP, yet Paradise2GE has attempted to use that get-out in their answers
* Paradise2GE has dissembled when answering questions as to why Nano Associates is not registered with FCA or HMRC
* Paradise2GE claimed 1,000 trades on their first day... for an unannounced site... purely from a posting that had around 40 upvotes at the time. It's a lie. Not even Nanex gets that many trades on a good day.
* Given a supposed 1,000 trades in a day, 1,440 minutes in a day, and 25% of Nano staked on Binance, we should expect to see >5 Nano transactions coming out of the Binance hot wallet at least every 10 minutes or so. I don't see those.
* The site works poorly on Android, showing a lack of technical skill in its developers
* The announcement of the exchange on /nanocurrency was by SMcArthurs - a one year old account, with very few postings, and no history in the /nanocurrency or /nanotrade subs
* Someone downvoted my reasonable questions on /nanotrade. I can think of only one person who would want to do that, if malicious
* Someone downvoted someone thanking me for my questions on /nanotrade. I can think of only one person who would want to do that, if malicious
* User astricali posted at 2018-12-03 05:00 GMT that they had made a successful sale of 999.99 Nano. I performed a text extraction of the address in the posted image, which they were apparently instructed to send to, and I notice that although that address did indeed receive 999.99 Nano from the Binance representative at 2018-12-04 02:17:01 (timezone unknown), the address has never actually pocketed the 999.99 Nano - the payment is still pending. This is odd, since one might expect an efficient provider to pocket the Nano they receive ASAP, to sell it on the markets
* The address has, however, received eight payments ranging from 1.99 Nano to 1,847.99 Nano. That surprises me, because I would expect a payment provider to use a unique address for each received payment.
* The address has only ever sent a payment once, to KuCoin, on an unknown date before 2018-07-25
* Paradise2GE and SMcArthurs have been extremely quiet in response to these complaints levelled against their site
* Edit: Why would they be selling Nano at 1.00 USD at the moment when Nano is on Binance at 0.94?
* Edit: Seeing 403 Forbidden error 2018-12-05 23:38GMT
* Edit: Don't even get me started on the Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017
* Edit: It's 12 days later - 18-Dec-2018. The 999 Nano still hasn't been pocketed at the address. In 12 days neither promoter has denied that it's their account. They are therefore probably incompetent even if non-malicious.
* Also noticed something I should have picked up before: that address is Represented by Nanowallet.io - which means it's probably an account created on Nanowallet.io. Any 'real' exchange would need, at the very minimum, to run its own node so that it can be online 24/7 - and if so would most likely Represent itself. Specifically choosing Nanowallet.io as their Representative after installing a full node would be an odd decision, given that they could help decentralise.
I'm at way over 22 red flags here. I hope I save someone.
submitted by throwawayLouisa to nanocurrency [link] [comments]
