Saturday, February 29, 2020

AutoMLPipeline – Create and evaluate machine learning pipeline architectures


AutoMLPipeline is a package that makes it trivial to create complex ML pipeline structures using simple expressions. It leverages Julia's macro programming features to symbolically process and manipulate pipeline expressions and their elements, automatically discovering optimal structures for machine learning prediction and classification.

Load the AutoMLPipeline package and submodules

using AutoMLPipeline, AutoMLPipeline.FeatureSelectors, AutoMLPipeline.EnsembleMethods
using AutoMLPipeline.CrossValidators, AutoMLPipeline.DecisionTreeLearners, AutoMLPipeline.Pipelines
using AutoMLPipeline.BaseFilters, AutoMLPipeline.SKPreprocessors, AutoMLPipeline.Utils

Load some of the filters, transformers, and learners to be used in a pipeline

#### Decomposition
pca = SKPreprocessor("PCA"); fa = SKPreprocessor("FactorAnalysis"); ica = SKPreprocessor("FastICA")

#### Scaler
rb = SKPreprocessor("RobustScaler"); pt = SKPreprocessor("PowerTransformer")
norm = SKPreprocessor("Normalizer"); mx = SKPreprocessor("MinMaxScaler")

#### categorical preprocessing
ohe = OneHotEncoder()

#### Column selector
catf = CatFeatureSelector(); numf = NumFeatureSelector()

#### Learners
rf = SKLearner("RandomForestClassifier"); gb = SKLearner("GradientBoostingClassifier")
lsvc = SKLearner("LinearSVC"); svc = SKLearner("SVC")
mlp = SKLearner("MLPClassifier"); ada = SKLearner("AdaBoostClassifier")
jrf = RandomForest(); vote = VoteEnsemble(); stack = StackEnsemble(); best = BestLearner();

Load data

using CSV
profbdata = CSV.read(joinpath(dirname(pathof(AutoMLPipeline)), "../data/profb.csv"))
X = profbdata[:, 2:end]
Y = profbdata[:, 1] |> Vector
head(x) = first(x, 5)
head(profbdata)

Filter categories and hot-encode them

pohe = @pipeline catf |> ohe
tr = fit_transform!(pohe, X, Y)
head(tr)

Filter numeric features, compute ica and pca features, and combine both features

pdec = @pipeline (numf |> pca) + (numf |> ica)
tr = fit_transform!(pdec, X, Y)
head(tr)

A pipeline expression example for classification using the Voting Ensemble learner

# take all categorical columns and hotbit encode each,
# concatenate them to the numerical features,
# and feed them to the voting ensemble
pvote = @pipeline (catf |> ohe) + (numf) |> vote
pred = fit_transform!(pvote, X, Y)
sc = score(:accuracy, pred, Y)
println(sc)

### cross-validate
crossvalidate(pvote, X, Y, "accuracy_score", 5)

Print corresponding function call of the pipeline expression

@pipelinex (catf |> ohe) + (numf) |> vote
# outputs: :(Pipeline(ComboPipeline(Pipeline(catf, ohe), numf), vote))

Another pipeline example using the RandomForest learner

# combine the pca, ica, fa of the numerical columns,
# combine them with the hot-bit encoded categorical features
# and feed all to the random forest classifier
prf = @pipeline (numf |> rb |> pca) + (numf |> rb |> ica) + (catf |> ohe) + (numf |> rb |> fa) |> rf
pred = fit_transform!(prf, X, Y)
score(:accuracy, pred, Y) |> println
crossvalidate(prf, X, Y, "accuracy_score", 5)

A pipeline for the Linear Support Vector for Classification

plsvc = @pipeline ((numf |> rb |> pca) + (numf |> rb |> fa) + (numf |> rb |> ica) + (catf |> ohe)) |> lsvc
pred = fit_transform!(plsvc, X, Y)
score(:accuracy, pred, Y) |> println
crossvalidate(plsvc, X, Y, "accuracy_score", 5)

Extending AutoMLPipeline

# If you want to add your own filter/transformer/learner, it is trivial.
# Just take note that filters and transformers expect one input argument
# while learners expect input and output arguments in the fit! function.
# The transform! function always expects one input argument in all cases.
# First, import the abstract types and define your own mutable structure
# as a subtype of either Learner or Transformer. Also load the DataFrames package.

using DataFrames
import AutoMLPipeline.AbsTypes: fit!, transform!   # for function overloading

export fit!, transform!, MyFilter

# define your filter structure
mutable struct MyFilter <: Transformer
    # variables here....
    function MyFilter()
        # ....
    end
end

# define your fit! function.
# filters and transformers ignore the Y argument;
# learners process both X and Y arguments.
function fit!(fl::MyFilter, X::DataFrame, Y::Vector=Vector())
    # ....
end

# define your transform! function
function transform!(fl::MyFilter, X::DataFrame)::DataFrame
    # ....
end

# Note that the main data interchange format is a dataframe, so the transform!
# output should always be a dataframe, as should the input for fit! and transform!.
# This is necessary so that the pipeline passes the dataframe format consistently to
# its filters/transformers/learners. Once you have this filter, you can use
# it as part of the pipeline together with the other learners and filters.

Feature Requests and Contributions

We welcome contributions, feature requests, and suggestions. Please open an issue for any problems you encounter. If you want to contribute, please follow the guidelines on the contributors page.

Help usage

Usage questions can be posted in:



from Hacker News https://github.com/IBM/AutoMLPipeline.jl

An efficient way to use Uniflow




TL;DR

This article is about how we can use Uniflow, what the benefits of using this library are, and how easy it is to work with. Don’t worry, we will also cover testing: we will see how easy it is to test our ViewModel.

Uniflow

Uniflow helps you write your app with a simple unidirectional data flow approach, ensuring consistency over time, and it is built on Kotlin Coroutines.

Let’s create a sample app using Uniflow

Most applications have a profile screen, so let’s focus on that use case. What benefits will Uniflow give us here? There are many, so let me point them out:

  • Easy way to test
  • We just need to write States & Events
  • Single source of truth
  • We can use coroutines
  • A smart way to write a Data flow in pure Kotlin

State

Let’s write a class we will use to represent a state in our application. We will call it UserProfileState. All we need is a data class, and it’s the only one we need to define because we will use generic Loading/Retry states.

What is a state? A state is something we need to retain: the logical data behind the UI. That is why our state holds only a name, an email, and a mobile number, just enough to feed our UI.
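The state class itself was an embedded gist in the original post; here is a minimal sketch of what it might look like (field names taken from the description above; the UIState import path may differ between Uniflow versions):

import io.uniflow.core.flow.UIState // package may vary across Uniflow versions

// Only the UI's logical data: nothing else needs to be retained here.
data class UserProfileState(
    val name: String,
    val email: String,
    val mobile: String
) : UIState()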

Event

What is an event? An event is a “one-shot” action; we don’t need to retain it.

We will have a few events:

  • RetryView: it will appear to the user when we don’t have data — in this example, when we don’t have anything to display
  • Loading: it will appear to the user while we are waiting for the result
  • OpenEmail: it will open another activity, which is EmailActivity
  • OpenMobileNumber: it will open another activity, which is MobileNumberActivity
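In code, these events could be a small sealed class hierarchy (a sketch; the UIEvent import path may differ between Uniflow versions):

import io.uniflow.core.flow.UIEvent // package may vary across Uniflow versions

// "One-shot" actions; none of these need to be retained after being consumed.
sealed class UserProfileEvent : UIEvent() {
    object RetryView : UserProfileEvent()
    object Loading : UserProfileEvent()
    object OpenEmail : UserProfileEvent()
    object OpenMobileNumber : UserProfileEvent()
}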

View Model

Now let’s create our ViewModel class that extends AndroidDataFlow

When we create our UserProfileViewModel we want to set the initial state because when we are opening our screen we don’t yet have data to set. That’s why we are sending our event to set the loading state.

Let’s focus on our next method, because it is important. Without this state, we would not be able to perform any further actions.

Well, this is important because we are setting our state, and mostly because it allows the user to take actions. Without this state, we can’t take actions like OpenEmail or OpenMobileNumber, so let’s take a look at those methods.

You are probably wondering about fromState by now. This is a method that prevents the logic inside the block from running if the current state is different from UserProfileState. This gives us an easy way to control the states, guaranteeing that we are always in the expected state before starting the next one.

But don’t worry, it is also easy to catch errors in fromState. Uniflow can make this “try/catch” block for you by offering a fallback lambda function that lets you handle your action in case of an error.
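The ViewModel gist is not reproduced in this copy of the post, but putting the pieces above together, it could look roughly like the sketch below. The repository, its return type, and the exact signatures of setState, sendEvent and fromState are assumptions that vary a bit between Uniflow versions, so treat this as an outline rather than copy-paste code:

import io.uniflow.androidx.flow.AndroidDataFlow // artifact/package may vary across versions

interface UserRepository {                          // hypothetical data source
    suspend fun getUserProfile(): UserProfile
}

data class UserProfile(val name: String, val email: String, val mobile: String)

class UserProfileViewModel(
    private val repository: UserRepository
) : AndroidDataFlow() {

    init {
        // The screen has no data yet, so signal "loading" until the profile arrives.
        sendEvent(UserProfileEvent.Loading)
        loadProfile()
    }

    fun loadProfile() = setState(
        {
            val user = repository.getUserProfile()
            UserProfileState(user.name, user.email, user.mobile)
        },
        { error -> sendEvent(UserProfileEvent.RetryView) }   // fallback lambda on error
    )

    // Only allowed once we actually are in UserProfileState.
    fun openEmail() = fromState<UserProfileState>(
        { sendEvent(UserProfileEvent.OpenEmail) },
        { error -> sendEvent(UserProfileEvent.RetryView) }
    )

    fun openMobileNumber() = fromState<UserProfileState>(
        { sendEvent(UserProfileEvent.OpenMobileNumber) },
        { error -> sendEvent(UserProfileEvent.RetryView) }
    )
}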

Activity

By now we’re almost done with all our logic, so let’s take a look at how to implement our UI. The onStates function in our activity allows us to consume the incoming state.

As you can see, the event will trigger a method called openFragment. This method checks whether the fragment currently displayed to the user is the same as the fragment that we want to set. If it isn’t, we replace the current fragment with UserProfileFragment. Why do we want to do that? Let’s see the sample below.
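The activity gist is also missing from this copy of the post; a sketch of that wiring might look like this. The onStates/onEvents helpers and event.take() come from Uniflow, while the layout ids, EmailActivity, MobileNumberActivity and the Koin-style injection are placeholders; adjust them to your project and library versions:

import android.content.Intent
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.fragment.app.Fragment
import io.uniflow.androidx.flow.onEvents   // package may vary across Uniflow versions
import io.uniflow.androidx.flow.onStates
import org.koin.androidx.viewmodel.ext.android.viewModel

class UserProfileActivity : AppCompatActivity() {

    private val userProfileViewModel: UserProfileViewModel by viewModel()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_user_profile)

        // Consume incoming states.
        onStates(userProfileViewModel) { state ->
            when (state) {
                is UserProfileState -> openFragment(UserProfileFragment())
            }
        }

        // Consume one-shot events.
        onEvents(userProfileViewModel) { event ->
            when (event.take()) {
                is UserProfileEvent.Loading -> showLoading()
                is UserProfileEvent.RetryView -> showRetry()
                is UserProfileEvent.OpenEmail ->
                    startActivity(Intent(this, EmailActivity::class.java))
                is UserProfileEvent.OpenMobileNumber ->
                    startActivity(Intent(this, MobileNumberActivity::class.java))
            }
        }
    }

    // Replace the displayed fragment only if it is not already the one we want.
    private fun openFragment(fragment: Fragment) {
        val current = supportFragmentManager.findFragmentById(R.id.container)
        if (current != null && current::class == fragment::class) return
        supportFragmentManager.beginTransaction()
            .replace(R.id.container, fragment)
            .commit()
    }

    private fun showLoading() { /* show a progress indicator */ }
    private fun showRetry() { /* show a retry button */ }
}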

Fragment

Fragments allow us to have a more flexible UI and to keep the logic clearly encapsulated. As you can see, it’s pretty much the same logic as in our Activity. Plus, we don’t need to worry about fetching the last state for this fragment and updating the UI, because Uniflow will take care of that for us.

Testing

So that we don’t forget about the important part, which is testing, let’s focus on that now.

What we will need to set up before we run the tests:

When everything is set up, we can start to write our test.
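The setup and test gists are not included in this copy of the article. Here is a rough sketch of both, assuming JUnit 4, the kotlinx-coroutines-test artifact, and a fake repository; the createTestObserver()/verifySequence() helpers are the names I recall from the Uniflow test artifact and may be spelled differently in the version you use:

import androidx.arch.core.executor.testing.InstantTaskExecutorRule
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.test.TestCoroutineDispatcher
import kotlinx.coroutines.test.resetMain
import kotlinx.coroutines.test.setMain
import org.junit.After
import org.junit.Before
import org.junit.Rule
import org.junit.Test

class FakeUserRepository : UserRepository {
    override suspend fun getUserProfile() =
        UserProfile(name = "Ada", email = "ada@example.com", mobile = "123 456 789")
}

class UserProfileViewModelTest {

    // Run LiveData work synchronously in unit tests.
    @get:Rule
    val instantRule = InstantTaskExecutorRule()

    private val testDispatcher = TestCoroutineDispatcher()
    private lateinit var userProfileViewModel: UserProfileViewModel

    @Before
    fun setUp() {
        Dispatchers.setMain(testDispatcher)
        userProfileViewModel = UserProfileViewModel(FakeUserRepository())
    }

    @After
    fun tearDown() {
        Dispatchers.resetMain()
    }

    @Test
    fun `emits UserProfileState once the profile is loaded`() {
        val observer = userProfileViewModel.createTestObserver()

        userProfileViewModel.loadProfile()

        observer.verifySequence(
            UserProfileState(name = "Ada", email = "ada@example.com", mobile = "123 456 789")
        )
    }
}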

That’s how easy it is to test with Uniflow. Your ViewModel has one state at a time. Testing focuses on checking state/event sequences, and you can replay any state and mock up any scenario!



from Hacker News https://ift.tt/2wUaHOZ

EMM386 and VDS: Not Quite Working

The other day I set out to solve a seemingly simple problem: With a DOS extended application, lock down memory buffers using DPMI and use them for bus-mastering (BusLogic SCSI HBA, though the exact device model isn’t really relevant to the problem).

Now, DPMI does not allow querying the physical address of a memory region, although it does have provisions for mapping a given physical memory area. But that doesn’t help here–mapping physical memory is useful for framebuffers, where a device’s memory needs to be mapped so that an application can access it. In my case, I needed the opposite, allowing a bus-mastering device to use already-allocated system memory.

As many readers probably know, VDS (Virtual DMA Services) should solve this problem through the “Scatter/Gather Lock Region” VDS function. The function is presented with a linear address and buffer size, and returns one or more physically contiguous regions together with their physical addresses.

I already had VDS working for low (DOS) memory, but I just could not get it working for “normal” extended memory. It did not matter if I used statically allocated variables placed in the executable, C runtime malloc(), or direct DPMI memory allocation functions. The VDS call succeeded and filled the result buffer with the same address I passed in, indicating a 1:1 linear:physical mapping, except the memory definitely was not mapped 1:1. So bus-mastering couldn’t work, because the addresses I programmed into the adapter were bogus. But why was this happening?

The exact same problem happened with EMM386 version 4.50 (PC DOS 2000) and QEMM 8.01. It also didn’t matter if I used the DOS/4GW or CauseWay DOS extender. The result was always the same, VDS gave me the wrong answers.

On a whim, I ran my code in the Windows 3.1 DOS box. And lo and behold, it worked! Suddenly VDS gave me the correct answers, i.e. physical addresses quite different from linear. So my VDS code was not wrong.

After more poking around, I’m not quite sure if this is a bug in EMM386 and QEMM, or in the DOS extenders. The QEMM documentation (QPI.TEC, QPI_GetPTE call) hints that for QEMM, only linear addresses up to 1088KB (1024 + 64) might have their physical address returned correctly. For EMM386 the exact logic is different but the behavior is similar: for higher linear addresses VDS does not bother translating the addresses and returns the input addresses unchanged (but does not fail!).

This is very likely why the DOS/4G(W) FAQ says (in so many words) about DMA to/from extended memory “don’t even try that, it’s not worth the trouble”. I followed the FAQ’s advice, allocated the required buffers in low memory, and hey presto, everything worked the way it was supposed to.

Since I wasn’t quite able to leave well enough alone, I had to try Jemm as well. It failed just like EMM386 and QEMM. I also tried the DOS/32A extender, but it behaved just like DOS/4GW and CauseWay–the physical addresses provided by VDS were wrong.

Using the Qualitas 386MAX-derived DPMIONE DPMI 1.0 host with EMM386 likewise did not change the outcome; VDS still wasn’t working.

On the other hand, using DOS/4GW, CauseWay, or DOS/32A on a system without a memory manager did work–because then there was a 1:1 linear to physical address mapping.

Where’s the Bug?

Windows 3.1 shows that this can work. So why doesn’t it always? The general answer is “because no one cared enough about making this work everywhere”.

VDS services need to be implemented by a DOS memory manager (EMM386, QEMM, etc.) because without VDS, UMBs will break in nasty ways. VDS also needs to be implemented by a multi-tasker (like DESQview or Windows/386) because without it, DMA anywhere in DOS memory will break.

However, a memory manager only tends to really care about VDS in the first 1MB + 64KB region; typical device drivers are usable by real-mode programs and therefore keep all their DMA buffers in low memory.

The VDS specification does not say who is responsible for providing the VDS services, although the likely answer is apparent: Whoever controls the page tables–because whoever controls the page tables knows the linear:physical mappings.

In a DPMI environment, that should be the DPMI host. That is the case with Windows 3.1, but not with e.g. the QEMM DPMI implementation or with the DPMI hosts built into many or most DOS extenders.

In VCPI environments, the lines get very blurry because the VCPI host (usually a memory manager like EMM386 or QEMM) shares responsibilities with the VCPI client (DOS extender). Things get very confused because the memory manager/VCPI host implements VDS, but does nothing to take VCPI clients into account. That leads to VDS calls succeeding yet delivering incorrect data.

Is There a Way Out?

So what was that DOS/4G FAQ talking about? What are the ways of performing the linear to physical address mapping? Obviously if one had access to the page tables, it would be trivial to map linear to physical addresses. But how to actually get there?

As it turns out, the solution for DOS/4G(W) is deceptively simple. At least when not running under some other DPMI host, one can read the CR3 register–which includes the physical address of page tables–and feed the physical address to the DPMI service to map physical memory. That way, the page tables become accessible and looking up the physical addresses is not difficult.
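For the curious, here is a rough sketch of that approach for a 32-bit Open Watcom program running under DOS/4GW. It assumes ring-0 execution (the DOS/4GW default), 4 KB pages plus optional 4 MB PSE pages, no PAE, and no other DPMI host in the picture; the DPMI call used is function 0800h (Physical Address Mapping). A real program would also free the mappings again (function 0801h) and cache the page directory instead of remapping it for every lookup.

/* lin2phys.c -- translate a linear address to a physical one under DOS/4GW.
 * Sketch only: assumes ring 0, 32-bit paging without PAE, Open Watcom C.
 */
#include <i86.h>
#include <string.h>

unsigned long read_cr3( void );
#pragma aux read_cr3 = "mov eax, cr3" value [eax];

/* Map a physical region into our address space via DPMI function 0800h. */
static void *map_physical( unsigned long phys, unsigned long size )
{
    union REGS r;

    memset( &r, 0, sizeof( r ) );
    r.w.ax = 0x0800;
    r.w.bx = (unsigned short)(phys >> 16);
    r.w.cx = (unsigned short)(phys & 0xFFFF);
    r.w.si = (unsigned short)(size >> 16);
    r.w.di = (unsigned short)(size & 0xFFFF);
    int386( 0x31, &r, &r );
    if( r.w.cflag )
        return NULL;
    return (void *)(((unsigned long)r.w.bx << 16) | r.w.cx);
}

/* Walk the two-level page tables; returns 0 if the page is not present. */
unsigned long linear_to_physical( unsigned long lin )
{
    unsigned long *dir, *tbl, pde, pte;

    dir = map_physical( read_cr3() & ~0xFFFUL, 4096 );
    if( dir == NULL )
        return 0;
    pde = dir[lin >> 22];
    if( !(pde & 1) )
        return 0;                                   /* PDE not present */
    if( pde & 0x80 )                                /* 4 MB PSE page   */
        return (pde & 0xFFC00000UL) | (lin & 0x3FFFFFUL);
    tbl = map_physical( pde & ~0xFFFUL, 4096 );
    if( tbl == NULL )
        return 0;
    pte = tbl[(lin >> 12) & 0x3FF];
    if( !(pte & 1) )
        return 0;                                   /* PTE not present */
    return (pte & ~0xFFFUL) | (lin & 0xFFFUL);
}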

Under the CauseWay DOS extender, it’s both more complicated and simpler. CauseWay runs application code in ring 3, so reading CR3 is harder, and apparently CauseWay also refuses to map the physical addresses of page tables. On the other hand, CauseWay always keeps an alias of the page tables at linear address 1024*4096*1023 (i.e. FFC00000h), which means the page tables are already there and can be accessed without any further action.

With a bit of legwork, and when running on a known/supported DOS extender, it is possible to do the job VDS ought to be doing.



from Hacker News https://ift.tt/2T5Ry5a

Tribute to Aaron Swartz


from Hacker News https://ift.tt/38cMYX9

How many users block Google analytics?


Comparing Google Analytics with server logs

I've been running Google Analytics on my blog for around a year now, and every time I look at the data it provides, I trust it less and less. Frequently, the statistics it provides are misleading, if not flat-out wrong1. And in addition to that, many people block google analytics, so I don't actually know how many views I'm getting.

A few days ago, I decided to compare my google analytics dashboard with my server logs2 to see how many people actually block google analytics. Here's what I found:

Out of 1,253 users in my sample, 565 blocked google analytics (≈45%). The breakdown by browser was as follows:

Browser % Blocking
Chrome 37%
Firefox 70%
Safari 39%

There were a few other browsers in the sample, but there wasn't enough data about them to be meaningful (There were <10 users of every other browser in my sample). I was expecting Firefox to be the largest, but it was a bit surprising to me that Safari and Chrome were approximately equal - I would have expected Chrome to be higher.

By operating system, it broke down as follows:

Operating System % Blocking
Mac 48%
Windows 49%
Android 0%
iOS 17%
Linux (non-Android) 67%
ChromeOS 53%

There was also one BSD user in the sample, who blocked google analytics.

I think the only really surprising thing about this is how high the percentage on ChromeOS is - I would have expected it to be much lower. This also shows how bad the adblocking situation is on mobile right now - I'd imagine most users who block GA/ads on desktop would also want to on mobile, but can't just because it's so difficult to set up an adblocker on mobile.

It's worth noting that my blog is absolutely not average, since I attract a much more technical audience, but the numbers should be roughly transferable to other programming blogs.

If you're in NYC and want to meet up over lunch/coffee to chat about the future of technology, get in touch with me.



from Hacker News https://ift.tt/39bkj69

Seeing the Smoke

COVID-19 could be pretty bad for you. It could affect your travel plans as countries impose quarantines and close off borders. It could affect you materially as supply chains are disrupted and stock markets are falling. Even worse: you could get sick and suffer acute respiratory symptoms. Worse than that: someone you care about may die, likely an elderly relative.

But the worst thing that could happen is that you’re seen doing something about the coronavirus before you’re given permission to.

I’ll defend this statement in a minute, but first of all: I am now giving you permission to do something about COVID-19. You have permission to read up on the symptoms of the disease and how it spreads. Educate yourself on the best ways to avoid it. Stock up on obvious essentials such as food, water, soap, and medicine, as well as less obvious things like oxygen saturation monitors so you know if you need emergency care once you’re sick. You should decide ahead of time what your triggers are for changing your routines or turtling up at home.

In fact, you should go do all those things before reading the rest of the post. I am not going to provide any more factual justifications for preparing. If you’ve been following the news and doing the research, you can decide for yourself. And if instead of factual justification you’ve been following the cues of people around you to decide when it’s socially acceptable to prep for a pandemic, then all you need to know is that I’ve already put my reputation on the line as a coronaprepper.

Instead this post is about the strange fact that most people need social approval to prepare for a widely-reported pandemic.

Smoke Signals

Most people sitting alone in a room will quickly get out if it starts filling up with smoke. But if two other people in the room seem unperturbed, almost everyone will stay put. That is the result of a famous experiment from the 1960s and its replications — people will sit and nervously look around at their peers for 20 minutes even as the thick smoke starts obscuring their vision.

The coronavirus was identified on January 7th and spread outside China by January 13th. American media ran some stories about how you should worry about the seasonal flu instead. The markets didn’t budge. Rationalist Twitter started tweeting excitedly about R0 and supply chains.

Over the next two weeks Chinese COVID cases kept climbing at 60%/day reaching 17,000 by February 2nd. Cases were confirmed in Europe and the US. The WHO declared a global emergency. The former FDA commissioner explained why a law technicality made it illegal for US hospitals to test people for coronavirus, implying that we have no actual idea how many Americans have contracted the disease. Everyone mostly ignored him including all major media publications, and equity markets hit an all time high. By this point several Rationalists in Silicon Valley and elsewhere started seriously prepping for a pandemic and canceling large gatherings.

On February 13th, Vox published a story mocking people in Silicon Valley for worrying about COVID-19. The article contained multiple factual mistakes about the virus and the opinions of public health experts.

On the 17th, Eliezer asked how markets should react to an obvious looming pandemic. Most people agreed that the markets should freak out and aren’t. Most people decided to trust the markets over their own judgment. As an avowed efficient marketeer who hasn’t made an active stock trade in a decade, I stared at that Tweet for a long time. I stared at it some more. Then I went ahead and sold 10% of the stocks I owned and started buying respirators and beans.

By the 21st, the pandemic and its concomitant fears hit everywhere from Iran to Italy while in the US thousands of people were asked to self-quarantine. Most elected officials in the US seemed utterly unaware that anything was happening. CNN ran a front page story about the real enemies being racism and the seasonal flu.

This week the spell began to lift at last. The stock market tumbled 7%. WaPo squeezed out one more story about racism before confirming that the virus is spreading among Americans with no links to Wuhan and that’s scary. Trump decided to throw his vice president under the coronavirus bus, finally admitting that it’s a thing that the government is aware of.

And Rationalist Twitter asked: what the fuck is wrong with everyone who is not on Rationalist Twitter?

Cognitive Reflection

Before Rationality gained a capital letter and a community, a psychologist developed a simple test to identify people who can override an intuitive and wrong answer with a reflective and correct one.

One of the questions is:

In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

Exponential growth is hard for people to grasp. Most people answer ’24’ to the above question, or something random like ’35’. The correct answer is 47: a patch that doubles daily and covers the whole lake on day 48 must have covered half of it just one day earlier. It’s counter-intuitive to people that the lily pads could be barely noticeable on day 44 and yet completely cover the lake on day 48.

Here’s another question, see if you can get it:

In an interconnected world, cases of a disease outside the country of origin are doubling every 5 days. The pace is slightly accelerating since it’s easier to contain a hundred sick people than it is to contain thousands. How much of a moron do you have to be as a journalist to quote statistics about the yearly toll of seasonal flu given a month of exponential global growth of a disease with 20 times the mortality rate?

[Chart: growth of COVID-19 cases outside China]

Social Reality Strikes Again

Human intuition is bad at dealing with exponential growth but it’s very good at one thing: not looking weird in front of your peers. It’s so good at this, in fact, that the desire to not look weird will override most incentives.

Journalists would rather miss out on the biggest story of the decade than stick their neck out with an alarmist article. Traders would rather miss out on billions of dollars of profits. People would rather get sick than be weird.

Even today, most people I’ve spoken to refuse to do minimal prep for what could be the worst pandemic in a century. It costs $100 to stock up your house with a month’s worth of dry food and disinfectant wipes (respirators, however, are now sold out or going for 4x the price). People keep waiting for the government to do something, even though the government has proven its incompetence on this matter several times over.

I think I would replace the Cognitive Reflection Test with a single question: would you eat a handful of coffee beans if someone told you it was worth trying? Or in other words: do you understand that social reality can diverge from physical reality, the reality of coffee beans and viruses?

Social thinking is quite sufficient for most people in usual times. But this is an unusual time.

Seeing the Smoke

The goal of this article isn’t to get all my readers to freak out about the virus. Aside from selling the equities, all the prep I’ve done was to stock a month of necessities so I can work from home and to hold off on booking flights for a trip I had planned for April.

The goal of this post is twofold. First, if you’re the sort of person who will keep sitting in a smoke filled room until someone else gets up, I’m here to be that someone for you. If you’re a regular reader of Putanumonit, you probably respect my judgment and you know that I’m not particularly prone to getting sucked into panics and trends.

And second, if you watched that video thinking that you would obviously jump out of the room at the first hint of smoke, ask yourself how much research and preparation you’ve done for COVID-19 given the information available. If the answer is “little to none”, consider whether that is rational or rationalizing.

I could wait to write this post two months from now when it’s clear how big of an outbreak occurs in the US. I’m not an expert on viral diseases, global supply chains, or prepping. I don’t have special information or connections. My only differentiation is that I care a bit less than others about appearing weird or foolish, and I trust a bit more in my own judgment.

Seeing the smoke and reacting is a learnable skill, and I’m going to give credit to Rationality for teaching it. I think COVID-19 is the best exam for Rationalists doing much better than “common sense” since Bitcoin. So instead of waiting two months, I’m submitting my answer for reality to grade. I think I’m seeing smoke.



from Hacker News https://ift.tt/2VqoOW9

Ask HN: Good ways to capture institutional knowledge?

Successful companies institutionalize the knowledge of their employees; this leads to better continuity and faster on-boarding. Things like huge monorepos of useful code, internal tools, process manuals, etc. are example products of this. Young companies tend to depend on the dedication and talent of key individuals, and in maturation, must somehow make the jump to institutionalized knowledge (so that "if someone got hit by a bus" things are ok). What are some successful methods you have used or seen used to accomplish this transition? What are problems you faced (skeptics, opponents, etc.)? I am involved with an organization that is slowly growing, is about to lose key personnel, and is looking to prepare.






from Hacker News https://ift.tt/2vrJXEZ

The Technical Backstory of Retroactive


The stack trace is fascinating. It looks like some sort of access violation when initializing a font. We are going to fix this by swizzling Aperture.

0 com.apple.CoreText 0x00007fff3c6a2019 CTFontGetClientObject + 13
1 com.apple.UIFoundation 0x00007fff6b37bcfc __UIFontGetExtraData + 18
2 com.apple.UIFoundation 0x00007fff6b37cd67 -[NSFont initWithTypefaceInfo:key:renderingMode:] + 47
3 com.apple.prokit 0x0000000106955159 +[NSProFont _proSystemFontWithFontName:pointSize:fontAppearance:useSystemHelveticaAdjustments:] + 736
4 com.apple.prokit 0x00000001069b553e -[NSCell(ProAppearanceExtensions) proSetFont:] + 155
5 com.apple.AppKit 0x00007fff37a4bc03 -[NSTextFieldCell setFont:] + 47
6 com.apple.Aperture3 0x0000000105f14cc2 0x105c9e000 + 2583746
7 com.apple.AppKit 0x00007fff37a9dec7 -[NSTextFieldCell init] + 31
8 com.apple.Aperture3 0x0000000105f12901 0x105c9e000 + 2574593
9 com.apple.Aperture3 0x0000000105f22bf4 0x105c9e000 + 2640884
10 com.apple.Aperture3 0x0000000105f1636d 0x105c9e000 + 2589549
11 com.apple.AppKit 0x00007fff379fdb02 -[NSIBObjectData nibInstantiateWithOwner:options:topLevelObjects:] + 1540
12 com.apple.AppKit 0x00007fff37b66b37 -[NSNib _instantiateNibWithExternalNameTable:options:] + 647
13 com.apple.AppKit 0x00007fff37b667bb -[NSNib _instantiateWithOwner:options:topLevelObjects:] + 143
14 com.apple.AppKit 0x00007fff37b65b08 -[NSViewController loadView] + 345
15 com.apple.AppKit 0x00007fff37b61b88 -[NSViewController _loadViewIfRequired] + 72
16 com.apple.AppKit 0x00007fff37b61b05 -[NSViewController view] + 23
17 com.apple.Aperture3 0x0000000105cf30ed 0x105c9e000 + 348397
18 com.apple.AppKit 0x00007fff37b601e0 -[NSWindowController _windowDidLoad] + 624
19 com.apple.AppKit 0x00007fff37ad3718 -[NSWindowController window] + 110

Step 4. Create a new framework in Xcode to swizzle broken methods in Aperture, and fill in selectors that have been removed.

It’s swizzle time! How do we swizzle an existing app? Of course, we use Xcode.

If you don’t have Xcode yet, download Xcode, then create a new Xcode project with a macOS Framework template.

Make sure the language is Objective-C.

Create a new file.

Create an Objective-C file.

The easiest way to swizzle something is just to make a category on NSObject and call method_exchangeImplementations in +load. I’ll name this file “Swizzling”, but you can name it whatever you want.

If we want to swizzle the problematic initializer on NSProFont, it’s time to put on our (NS)Hipster hat and revise our swizzling skills.

Import the Objective-C runtime header, and let the fun begin.

#import <objc/runtime.h>

Refer back to the problem report. Here’s the suspect of the crash:

3 com.apple.prokit 0x0000000106955159 +[NSProFont _proSystemFontWithFontName:pointSize:fontAppearance:useSystemHelveticaAdjustments:] + 736

Start with the most obvious idea: what if we just initialize a regular system font instead of whatever special pro font Aperture is trying (and failing) to initialize?

+ (NSFont *)swizzled_proSystemFontWithFontName:(NSString *)name pointSize:(CGFloat)size fontAppearance:(id)appearance useSystemHelveticaAdjustments:(BOOL)adjustments {
    return [NSFont systemFontOfSize:size];
}

Then use the dynamic Objective-C runtime to exchange Aperture’s implementation with ours.

+ (void)load {
    method_exchangeImplementations(
        class_getClassMethod(NSClassFromString(@"NSProFont"),
                             NSSelectorFromString(@"_proSystemFontWithFontName:pointSize:fontAppearance:useSystemHelveticaAdjustments:")),
        class_getClassMethod([self class],
                             @selector(swizzled_proSystemFontWithFontName:pointSize:fontAppearance:useSystemHelveticaAdjustments:)));
}

Some wishful thinking: If initializing the font doesn’t fail, this can probably get us over to the next step.



from Hacker News https://ift.tt/2JvAkZJ

Flutter and Dart, or how to quickly build a mobile app without losing your hair

Flutter + Dart, or how to quickly build a mobile app without losing (too much of) your hair

27/02/2020


In this day and age there’s a steady influx of new, revolutionary frameworks, be it frontend-related or mobile. If one has been active in web development, she or he should be well acquainted with the constant oversupply of fresh, ingenious approaches and lightweight solutions to complex problems. This usually solves one issue and creates another – instead of wondering whether there is a technology that’s viable for us to use, we are currently left with the equally frustrating choice of which one of them we should pick.

This is why when I stumbled upon Flutter, I was quite interested in giving it a go – could it be a viable contender, or maybe even serve as a go-to solution that would give this dilemma at least a moment’s pause?

What is it?

Flutter is a mobile app UI SDK by Google. It utilizes the Dart VM (also by Google, and claimed to be optimized specifically for UI), giving us the opportunity to develop for both mobile and desktop devices. Dart itself can also be used for web development, even in tandem with the all-too-familiar Angular framework, but that’s a story for another day.

It provides us with AoT (ahead-of-time) compilation to native machine code, which aims at the fastest possible execution time for the completed app, without too much of an overhead.

For developers, it offers its JIT (just-in-time) compiler and the Hot Reload feature, which enables one to change the application without losing its state – which is quite nifty, as the pain of changing UI in a ‘deep’ feature and having to navigate to it with each iteration is well known to anyone who has ever worked on UI.

An important part of the SDK is, of course, its control library. As Flutter aims at developing both for Android and iOS, it gives the option of using either the Material (Google, Android) or Cupertino (Apple, iOS) control set. ‘Does it mean the application switches its looks depending on whether it’s deployed on an Android or iOS phone, so it looks native on both? Sweet!’ Not really. You can use either of the libraries, or you can use both – that much is true, but there’s no uniform UI switching functionality. It can, of course, be implemented manually – and I’m not saying that it’s something never to be done. Bear in mind though – functionality like that implies managing two different sets of layout controls, which can quickly turn ugly and, therefore, such an approach should be taken cautiously.

Everything in Flutter is, by default, a widget. If you have any experience in Angular 2+, it’s pretty much a fancy Component and should be a pretty familiar concept. This base type contains, by default, a build method which defines the look and feel and can customize it based on passed parameters and context. Widgets can be either stateless or stateful. Stateless widgets don’t undergo any discernible mutations during their lifecycle – they are mostly static. Stateful widgets, on the other hand, are built each time they are triggered to (for example, when a watched variable changes, user performs a specific action – like a click – etc.).

This would be a good time to mention that Flutter is reactive (akin to React), which means there’s no default ongoing refresh loop like in Angular. Instead once key actions are performed – the UI or part thereof (like one of its widgets) redraws itself, according to changes in state.

I’ve already mentioned that Dart claims to be optimized for UI – what does that mean, though? In this case: rich collection handling, isolate-based concurrency and async-await with futures. I’d say this pretty much tells us that the intended application of the SDK is building business apps, rather than, say, games. It’d be wrong to assume that people won’t explore making games using Flutter though; there’s even a 2D game engine. The point I’m trying to make is that this mode of application seems like a perfect fit for this specific set of features, and that’s the angle I’ve decided to explore myself.

The risks

Although this all sounds quite good, there’s also a flip side.

Firstly, Flutter is still a fledgling SDK. It is mature for its age, but it should be noted that its alpha launched in May 2017, while 1.0 was released in December 2018. This means that at the point in time when this post is being written, it’s still just a one-year-old release. What are the consequences? The community – while sizable – is still not quite up to par with those of currently mainstream technologies. This affects the ability to find solutions for some common problems, and you might hit a dead end on more than one occasion – requiring additional work from you and going through specs. However, Flutter is well documented, and the community is ever-growing, so we might mark it up as a ‘work in progress’ sort of thing, rather than a distinct flaw.

Secondly (and this can be seen as both a flaw and an advantage), both Flutter and Dart come from Google. The good part of it is that Google is a tech giant, and if they want to maintain something, they have the resources and manpower to do it. The bad part is that while Google is known to introduce useful tech and services, it’s also known to kill off or retire them when they’re deemed obsolete. That’s why there’s always the risk that Flutter might eventually end up the same way, but that might not happen soon, maybe even not for another couple of years. So yes, it’s a risk, but then again – it’s the same for any relatively new technology, and every tech has its beginnings.

What tools do we get?

Flutter can be developed from within the most common programming IDEs – we’ve got Android Studio, IntelliJ IDEA, there’s also a Visual Studio Code plugin – which means that most developers won’t have to stray too far from their default environment. In my case, as recently I’ve been doing more web-oriented work, the choice was VS Code, but this shouldn’t affect the development in any meaningful way as text files are still fortunately just text files. The target platform will be Android (the reasons for this choice are quite down-to-earth – I simply neither own an iPhone, MacBook, nor even an iMac), so it looks like I’ll be installing Android Studio anyway – for its VM.

Aside from the IDE there are also the Flutter/Dart DevTools, which are a suite designed to monitor the app’s performance and provide some debugging instruments, like the Flutter inspector, which acts pretty similar to its WebTools counterpart. The real-time resource monitor is potentially a huge help in finding the app’s performance bottlenecks and the hierarchical inspector – in seeking out possibly redundant nestings, which plague UIs of many apps and websites alike.

Getting started – our ‘Hello World’ is all business

What’s more exciting than writing a mobile application for managing your insurance policies? It’s important to note that I might try to do some things in different ways, resulting in code inconsistencies. The app is supposed to contain some ideas and examples on how to solve common scenarios, without deciding on which one is best.

A simple overview of the app-to-be:

  • Has a ‘home’ screen with bought policies summary
  • Lets you register a policy
  • Policy registration is achieved via a wizard
  • Policies can be of different types
  • Insurance subjects can be of different types
  • User has an account (no anonymous use)
  • The app is a ‘light client’ – all dictionaries, data and operations are stored server-side
  • Request/response body format is JSON

Our API will be simulated by Mockoon, I’ll be using VS Code as the IDE, and the device will be provided by Android Emulator (I’ve settled for a Nexus 6 API 28). As a starting point I’m using the official guide and empty app created according to Flutter’s official website, followed up to the point where we have a barebones Flutter project. In my case it got me the structure on fig.1. You’ll find the complete app code here, and I suggest browsing it alongside this post, as it will be referred to all the time. This part of the post is, after all, mainly about pointing out potentially useful chunks of code and the purpose they could serve.


Fig. 1: The initial state of the project

Taking a look around and inspecting the foundations

The file pubspec.yaml holds the project’s dependencies, assets and version number – pretty straightforward. It’s also seeded with lots of informative comments, but we won’t be doing much work with it, at least not on a daily basis. What’s most important for us is the lib folder, as there lies the root of our application, the main.dart file, and in it – the main() method. This is the entry point to our application, and no code should go beyond that point. Alright, time for some scaffolding.

The homepage is the default view, or route, of our application. There we’ll be displaying a bunch of policies. So, surely getting the dictionaries from our API would seem appropriate. I’ve built a singleton service that calls the API service and makes the dictionaries available before the application even starts, so that the data is readily available wherever in the application we end up. It’s called CommonData, and the dictionaries API service – DictionariesService. Both are located in the lib/services folder. I’ve also added a helper service (called Helper, another naming masterpiece) for universally used functionality, like a default padding, common conversions etc.

Fig. 2: commonData.dart

CommonData (fig.2) is a singleton with an internal constructor, which stores its only instance within a static field of itself. In the app we won’t be using the CommonData class definition anywhere else – only its commonData instance declared in this file. The DictionariesService.get() method returns a Future<DictionariesService>, which is basically a promise. This means we can either await its result and continue with code execution of initialize() once everything’s ready, or use a .then(…) and return early. We want initialize() to finish once we’ve received a response, so we’ll use await. We’ll get to the implementation of DictionariesService.get() later.
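Fig. 2 is an image in the original article; a minimal sketch of what such a singleton could look like (field and file names assumed):

// commonData.dart (sketch) -- a singleton that pre-loads the dictionaries.
import 'dictionariesService.dart';

class CommonData {
  // Internal constructor; the single instance lives in a static field.
  CommonData._internal();
  static final CommonData _instance = CommonData._internal();

  DictionariesService dictionaries;

  Future<void> initialize() async {
    // Await the dictionaries so they are ready anywhere in the app.
    dictionaries = await DictionariesService.get();
  }
}

// The only thing the rest of the app ever uses.
final CommonData commonData = CommonData._instance;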

After a bit of research it turned out hooking commonData.initialize() to run before the UI even gets drawn is quite trivial – it’s enough to place it in our main() (fig. 3).

Fig. 3: commonData gets initialized before we even run the app itself
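The code in fig. 3 is also an image; the gist of it is simply awaiting the initialization before runApp (a sketch):

// main.dart (sketch)
import 'package:flutter/material.dart';
import 'services/commonData.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized(); // required when awaiting work before runApp
  await commonData.initialize();             // the app starts only AFTER this completes
  runApp(MyApp());
}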

This way wherever we are in the app, we’ll always be sure commonData was initialized, as the app itself is executed AFTER initialize() completes. Such a solution could be useful in many cases, like a server-stored application profile or theme, data staging, application setup etc. In case of asynchronous operations, though, we should be handling them on the home screen, where we can display some sort of loading indicator (which we’ll see in action as well). This would prevent the user from seeing a blank screen on startup and wondering whether the application crashed. That’s why if we absolutely have to do something before the app properly starts up, it’s probably best to stick to operations with a predictable, negligible execution time or create a separate ‘loading’ screen with some animation and a clear ‘loading’ message to put the user at ease, do it there and navigate home upon completion. I’m leaving this ‘awkward preload’ in the application as a sort of a UX anti-pattern.

Let’s take a look at the MyApp class, located just below main(). Its body is mainly the overridden build(BuildContext) method – which is called every time the MyApp widget is being redrawn. Our app has more than one screen – home and 5 steps of the policy registration wizard (policy type, product, covers, owner, and subject), hence I’ve conducted a careful study of the subject in question (fig. 4).

Fig. 4: Flutter application navigation research

So, navigation in Flutter is called ‘routing’. I’ve created some routes according to one of the many tutorials (fig. 5). A default, initial route – this is our MyHomePage widget – and five wizard steps. We’ll see if we will need to access the build context, but it’s nice to have it on standby.

Fig. 5: basic routing in a Flutter app
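Fig. 5 is an image; route definitions along these lines would match the description (the widget class names for the wizard steps are assumed):

// Part of MyApp.build() (sketch): a default route plus the five wizard steps.
MaterialApp(
  title: 'Policies',
  theme: ThemeData(
    primarySwatch: Colors.indigo,
    accentColor: Colors.amber,
  ),
  initialRoute: '/',
  routes: {
    '/': (context) => MyHomePage(),
    '/newPolicyType': (context) => NewPolicyTypePage(),
    '/newPolicyProduct': (context) => NewPolicyProductPage(),
    '/newPolicyCovers': (context) => NewPolicyCoversPage(),
    '/newPolicyYou': (context) => NewPolicyYouPage(),
    '/newPolicySubject': (context) => NewPolicySubjectPage(),
  },
)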

Flutter in a Material world

This is a good moment to mention that, because our app uses the Material control set and is a MaterialApp instance, we can quickly change its aesthetics following the Material Design principles. The ThemeData class contains ‘color and typography data for a material design theme’. It can be accessed within the application via a static method: Theme.of(BuildContext) and hooked up to various properties if we need to change their default, theme-driven value. For now we’ll just set the primarySwatch (the leading color of the application and its various shades) and the accentColor (also an assembly of color shades, the app’s de facto secondary color). If we stick to using the theme’s defaults and/or generated values (which we will try to do), we should end up with a more or less visually appealing UI. If we don’t want to use the default color swatches, we can easily define our own (fig.6). It’s a lot of conceptual work though (unless we’re given a style guide by the client), and I would like to avoid creating some sort of aesthetic abomination, so I’ll keep it simple. There is also a myriad of material color swatch generators on the web that offer the option of generating one if you provide the ‘primary’ shade. There is an option of setting the errorColor, but as a person that has had his toe stuck in the UI/UX field, I advise you to approach it with caution – the standard red is pretty much the error indication industry standard. Avoid changing it if the color scheme allows us to do so, maybe change the shade just a bit?

This is also a fun way of testing the Hot Reload feature – try changing theme colors, save, and then see the app change before your eyes. For me, it’s quite satisfying.

Fig. 6: an example custom color swatch.
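Fig. 6 is an image; a hand-rolled swatch is just a MaterialColor built from the primary shade plus its lighter and darker variants (the values below are simply Material indigo, used here as an example):

// A custom primary swatch: shade 500 is the "primary" color, the map holds all ten shades.
const MaterialColor myPrimarySwatch = MaterialColor(
  0xFF3F51B5,
  <int, Color>{
    50: Color(0xFFE8EAF6),
    100: Color(0xFFC5CAE9),
    200: Color(0xFF9FA8DA),
    300: Color(0xFF7986CB),
    400: Color(0xFF5C6BC0),
    500: Color(0xFF3F51B5),
    600: Color(0xFF3949AB),
    700: Color(0xFF303F9F),
    800: Color(0xFF283593),
    900: Color(0xFF1A237E),
  },
);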

Homepage

The homepage is basically a list of tiles which represent individual policies and expand to show their details; there is also an option of registering a new policy. The tile should therefore be stateful, as its look mutates, but the page can remain stateless. Yes, it displays a list of a variable length, but neither its elements nor their values change during its lifecycle. Note that if we didn’t separate the tiles into standalone widgets (and instead handled everything in one monster of a class) then it would have to be stateful.

Fig. 7: MyHomePage data initialization

Let’s start with the data needed for our route (fig. 7). Whatever logic we place here will be executed each time we navigate to ‘/’. In this case it’s convenient – each time we end up on the home screen, we’ll have up-to-date account data and a list of registered policies. That way we’ve already solved a problem we’d be facing in the future: how to refresh the home screen after completing the wizard; now all we have to do is navigate back.

Inside MyHomePage (homepage.dart) you can finally see some UI definition. The root of our page is an aptly named Scaffold, which lets us set an app bar, an action button, the body of the document and various other options – effectively a template for a general purpose mobile app. If undefined, the part will be omitted (i.e. no footer = no footer, not an empty footer). The appBar is minimal, there’s a floatingActionButton to initiate the new policy wizard, backgroundColor has been hooked up to the current theme’s backgroundColor (to maintain consistency if we decide to change colors), and there’s of course the meat of the matter – the body.

The policies, as noted earlier, are wrapped in a Future – they aren’t ready to be passed along to a simple ListView. That’s what FutureBuilder<> is for: it’s in fact a widget that returns content based on a Future’s internal state. Using the snapshot (AsyncSnapshot) variable we can return different widgets depending on whether the Future has already finished or is still in progress, or if it contains an error and so on. In our case we’ll return a ListView if it’s done, and a loading indicator if it isn’t – pretty standard stuff. It could probably be a good idea to wrap any possible error handling for this into some sort of a universal method in the Helper class that accepts the snapshot.connectionState and outputs some generic error; there are many options on how to solve borked Futures – here, for the sake of brevity, I’m using none of them. It’s done or it’s loading.

Fig. 8: FutureBuilder in action
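Fig. 8 is an image; the pattern it shows is roughly this (a sketch, with _policies standing in for the Future fetched for this route and HomepageTile taking a policy in its constructor):

@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(title: Text('My policies')),
    floatingActionButton: FloatingActionButton(
      child: Icon(Icons.add),
      onPressed: () => Navigator.pushNamed(context, '/newPolicyType'),
    ),
    backgroundColor: Theme.of(context).backgroundColor,
    body: FutureBuilder<List<Policy>>(
      future: _policies,
      builder: (context, snapshot) {
        if (snapshot.connectionState == ConnectionState.done && snapshot.hasData) {
          return ListView(
            children: snapshot.data.map((p) => HomepageTile(p)).toList(),
          );
        }
        // Still running (or failed): keep it simple and show a spinner.
        return Center(child: CircularProgressIndicator());
      },
    ),
  );
}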

Moving on to the HomepageTile widget – our first stateful UI part. Every StatefulWidget consists of the widget declaration (fig. 9) and its state – and the state is where the magic happens.

Fig. 9: The StatefulWidget. Not much to look at here – practically a state factory

The UI of the widget is defined in the state, in its build method. There, every use of setState(fn) tells the framework to rebuild, reevaluating its build(BuildContext) method with updated property values. Here I’m using the _expanded field value as a condition for whether I return the mini _buildMiniTile() or verbose _buildMaxiTile() widget version. It could, of course, be a matter of just a simple conditional assignment, but let’s make it look better with an AnimatedCrossFade widget. It does exactly what it says on the tin – it crossfades one child with another according to its crossFadeState (fig. 10). Thanks to the fact that on each setState the widget gets rebuilt it’s possible to juggle between more than two states, but it’s a rather unusual scenario – getting to a state with a specific number of taps sounds a bit like teasing the user or playing a ‘hidden object’ game unless very strongly visually implied.

Fig. 10: AnimatedCrossFade, driven by the _expanded value
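Fig. 10 is an image; the corresponding widget code could look roughly like this inside the tile state's build method:

@override
Widget build(BuildContext context) {
  return GestureDetector(
    onTap: () => setState(() => _expanded = !_expanded),
    child: AnimatedCrossFade(
      duration: const Duration(milliseconds: 200),
      firstChild: _buildMiniTile(),
      secondChild: _buildMaxiTile(),
      crossFadeState:
          _expanded ? CrossFadeState.showSecond : CrossFadeState.showFirst,
    ),
  );
}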

Alright, so we know how to create a home screen with generic tiles mapped to user’s policies. The time has come to see how the app is being fed the data we’ve got set up in our Mockoon API. For this purpose we should open up the DictionariesService (fig. 11).

Fig. 11: DictionariesService – not much, but enough

As you can see, the get() method is marked as async – this means that whatever we return will be wrapped in a Future<>, to accommodate its promise-like handling. The http client executes our command asynchronously and provides the response, status code and all. Just below we’re mapping json (whose type is by default Map<String, dynamic>) to our DTO objects. Since these are dictionaries, I’ve taken the liberty of creating maps for them so we won’t have to iterate through all the entries when we need to display a name corresponding to a specific code (i.e.: commonData.maps[DictCode.PRODUCT_TYPE][_policy.type]).
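Fig. 11 is an image. A sketch of such a service, using the http package against a local Mockoon endpoint (the URL and the JSON field names are assumptions):

// dictionariesService.dart (sketch)
import 'dart:convert';
import 'package:http/http.dart' as http;

class DictionariesService {
  // dictionary code -> (entry code -> entry name)
  final Map<String, Map<String, String>> maps;
  DictionariesService(this.maps);

  // async, so callers get a Future<DictionariesService> they can await or .then().
  static Future<DictionariesService> get() async {
    final response = await http.get('http://10.0.2.2:3000/dictionaries');
    final List<dynamic> entries = jsonDecode(response.body);
    final maps = <String, Map<String, String>>{};
    for (final entry in entries) {
      final dict = maps.putIfAbsent(entry['dictCode'], () => <String, String>{});
      dict[entry['code']] = entry['name'];
    }
    return DictionariesService(maps);
  }
}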

Next, let’s take a look at our DTOs. There’s no ‘default’ option of turning json into objects, but fortunately there are plugins. In my case it’s json_annotation which, once started as a watcher with ‘flutter packages pub run build_runner watch’ will look for the @JsonSerializable annotation and create mapping functions – as we can see on fig. 12-13.

Fig. 12: policy.dart – our DTO class

Fig. 13: policy.g.dart – generated by json_annotation

 

This simplifies things greatly for us and provides an easy way to map our classes both to and from json.
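Figs. 12-13 are images. The hand-written half of that pair follows the usual json_serializable pattern (field names assumed); the policy.g.dart part is what build_runner generates for you:

// policy.dart (sketch)
import 'package:json_annotation/json_annotation.dart';

part 'policy.g.dart'; // generated by build_runner / json_serializable

@JsonSerializable()
class Policy {
  String number;
  String type;
  String product;
  double premium;

  Policy();

  // Hooks into the generated mapping functions.
  factory Policy.fromJson(Map<String, dynamic> json) => _$PolicyFromJson(json);
  Map<String, dynamic> toJson() => _$PolicyToJson(this);
}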

The Wizard

Two important parts of almost every conceivable business app out there are forms and validation. Let’s see what we’re working with while checking out the code for the insurance policy wizard. The first two steps (1_newPolicyType, 2_newPolicyProduct) are all pretty standard stuff found in all the other ones, so I’ll be skipping them. If you want to see an example of how to asynchronously perform a calculation while the user is filling in the data, check out the 3_newPolicyCovers step – it contains a dummy implementation for one of life’s greatest mysteries, premium calculation.

Giving our app Form

Definition of forms looks pretty standard – we define a Form object, hand it a pre-generated GlobalKey<FormState> key and then define its elements, as seen both in the 4_newPolicyYou.dart file and on fig. 14.

Fig. 14: 4_newPolicyYou.dart – pretty straightforward form declaration. Note the use of Helper to minimize code clutter.

The form can interact with the data in many ways, so it’s possible to design it in line with the developer’s preference. If we want a pseudo two-way-binding behavior, we can persist the value in the onChanged handler inside a setState wrapper. We can, however, just use the dedicated onSaved and persist the data once the form is all ready – which is the course I’ve decided to take. The Step4Builder class (fig. 15) holds the ever-present wizard sequence – if form is valid, save and move on. Injecting data into the form is handled with ease – since we’re passing values from the model (processData) into respective controls’ initialValue, they will update with each setState operation. That’s why we can simply fill the model’s fields (processData.setOwnerFromAccount) and then reset the form using its key (this._formKey.currentState.reset) which will cause it to reevaluate the initial values of the fields – grabbing them straight from the model. Why reset the form at all though? This will ensure that the fields we didn’t fill in setOwnerFromAccount get assigned their default values, which will still be in the model, as long as we don’t persist the form-stored values.

This is just one strategy – in different scenarios we might encounter different preferred solutions, but it’s easy to notice that we aren’t forced to deal with them in a specific way.

Fig. 15: Step4Builder – if valid, save and go to the next step. Nice and clean.
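Fig. 15 is an image, but the recurring wizard pattern it illustrates boils down to a few lines (the route name is assumed; RaisedButton was the current button widget at the time the article was written):

Widget _buildNextButton(BuildContext context) {
  return RaisedButton(
    child: const Text('Next'),
    onPressed: () {
      if (_formKey.currentState.validate()) {
        _formKey.currentState.save();                      // runs every field's onSaved
        Navigator.pushNamed(context, '/newPolicySubject'); // move on to the next step
      }
    },
  );
}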

Achieving dynamic form layout does not differ much from classic js/html shenanigans. On the wizard’s final step, 5_newPolicySubject.dart, we’re supposed to register data of the policy’s subject, which implies the use of different forms depending on its type (a car, a person, a lizard, and so on). We’ll achieve that by defining different fieldsets in separate widgets and showing the ones that fit our choices in the previous steps. In the application there’s only one type implemented (reptileObject.dart), but more can be added simply by performing a check in the build method (fig. 16).

Fig. 16: 5_newPolicySubject.dart: since all I’m thinking of is insuring my pet which (presumably) is a lizard, the only available form definition is the ReptileObject, but nothing prevents us from inserting an IF statement into the child property that will supply the correct one.

Okay, so we’ve got some textboxes and a dropdown – time for everyone’s favorite control, the date control… which does not exist. This may seem strange if we’ve never had a chance to develop a mobile app but, when we think about it a little longer, it makes perfect sense. It’s always preferable to use the native system’s method of input (for example, we do not define our very own special keyboard-control-3000-XP, we just use the one provided by the system), and each mobile system has its own date input method in the form of a calendar. That’s why our ‘date input’ will be just a read-only TextFormField, which will ask the system to supply its value when we touch it. An example is contained within the aforementioned reptileObject.dart file (fig. 17-18).

Fig. 17-18: reptileObject.dart – The TextFormField has a very limited amount of responsibility – to tell the system that the user needs to input a date, and to display the result of this action. We’ve defined an onTap handler to intercept any attempt to interact with the control and make it show the system datepicker instead. As it’s an asynchronous action (the user can take as much time as required), the whole method is appropriately marked.
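The code in figs. 17-18 is an image; the idea translates to something like this (label and controller names assumed):

// A read-only text field that delegates date input to the system date picker.
TextFormField(
  readOnly: true,
  controller: _birthDateController,
  decoration: const InputDecoration(labelText: 'Date of birth'),
  onTap: () async {
    final picked = await showDatePicker(
      context: context,
      initialDate: DateTime.now(),
      firstDate: DateTime(1900),
      lastDate: DateTime.now(),
    );
    if (picked != null) {
      setState(() {
        _birthDateController.text = '${picked.day}.${picked.month}.${picked.year}';
      });
    }
  },
)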

Validation – let’s put that errorColor to work

Now that the form is in place, all that’s left is to provide some rudimentary data validation. The convention is quite simple: each form control has a ‘validator’ property, which accepts a function, with the value as the input and a string as the output. If the output is non-empty, its contents (which are considered a validation error message) are shown in the appropriate area. A simple example of combined validation (two criteria, two messages) can be seen on fig. 19.

Fig. 19: A simple validation – if Validations.required returned an error message, return it. Otherwise check if input is a valid email address. If not, return our custom message.
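Fig. 19 is an image; a combined validator of that shape could look like this (the required-check helper from the article's Validations class is replaced here with an inline check, and the e-mail regex is only a rough assumption):

TextFormField(
  decoration: const InputDecoration(labelText: 'E-mail'),
  validator: (value) {
    if (value == null || value.isEmpty) {
      return 'This field is required';
    }
    if (!RegExp(r'^[^@\s]+@[^@\s]+\.[^@\s]+$').hasMatch(value)) {
      return 'Please enter a valid e-mail address';
    }
    return null; // null means the value is valid
  },
)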

This is all good and well, but what if we have to perform an asynchronous validation, like a username availability check? Well… tough luck. Flutter does not support Future<> in validators, and most likely it never will, as it was stated that it could break sync validation, and mixing the two isn’t a good UI practice anyway because reasons.

Even if we take this as a fact, it does not prevent us from facing a scenario in which we simply must perform a validation server-side, with the only alternative being loading gigabytes worth of data into the device’s memory. Fortunately, there’s a sort-of-accepted workaround which is pretty simple. In the validator, we perform the call and toggle a local flag. If the flag is up, we don’t display any validation message. Upon the call’s completion, we save the result into some local variable, toggle the flag and manually trigger the form’s validation. This way the first time the validator is triggered no message is shown (or we might show a ‘please wait…’ to indicate an action in progress), and the second time it changes to the action’s result (which would have to override the hypothetical ‘please wait’). The whole process – with an example – is available in the linked post.

So yes, while async validation is possible this way, it stands to reason to expect the SDK to support it out of the box. It’s possible, but it could be cleaner.
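A bare-bones sketch of that workaround, living inside the form's State class (api.checkUsernameAvailable is a hypothetical server call):

bool _checkingUsername = false;
String _usernameError; // last server-side verdict; null means OK

String _usernameValidator(String value) {
  if (_checkingUsername) return null; // first pass: stay silent while the call runs
  return _usernameError;              // later passes: show whatever the server said
}

void _onUsernameChanged(String value) {
  _checkingUsername = true;
  api.checkUsernameAvailable(value).then((available) {
    _checkingUsername = false;
    _usernameError = available ? null : 'This username is already taken';
    _formKey.currentState.validate(); // manually re-run validation with the result
  });
}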

Nevertheless, we prevailed – our app is up and running, and it didn’t even take much time, all things considered. We’ve covered most of the basics when implementing a business app, and there weren’t really any roadblocks – all in all, I think we could mark it off as a success. We’re ready to remove the loading anti-pattern, clean up, integrate with a backend, and do a complete rework after the first round of customer feedback.

Let’s have a look at this beaut:

Home screen, default and expanded

Covers selection, better get that ‘bad puns’ one

Policy holder data, with an autofill option

Policy subject screen, count these legs carefully

Final thoughts

So, should you use Flutter for creating a mobile app? I think one should consider a few things in making this decision – and for different people the final answer may vary.

If the mobile app you’re about to write is your first of its kind – I’d say go for it. Flutter has got a quite accommodating learning curve, and does not require any obscure knowledge. The tutorials and materials available make it pretty easy to determine what we can use in specific scenarios, and what tools we have at our disposal. When learning a framework that’s been around for years, some practices can be deemed too obvious to describe, consequently making them very hard to find out about. As it’s a relatively new tech, no question is too obvious, and there aren’t many oh-everybody-knows-that tricks buried forever under a mountain of new feature issues. For an experienced mobile developer, on the other hand, things stand as with every other new technology. When committing to write an advanced, multi-feature app, the bigger it is, the better it is if you’ve got any experience in the technologies used. However, if you’ve got a small app to write, Flutter might prove an invaluable tool in rapid development.

In terms of the community, it’s still growing. It’s not overwhelmingly vast, but it’s not minuscule either. Opinions on this may differ, but I think its current size warrants small- to mid-sized app development. The bigger the user base, the more edge cases have been researched, and the bigger the chance you’ll find help when in need, so since the community is steadily growing, large-scale projects are becoming more and more viable, and less and less of a risk – provided it won’t be killed off.

There are currently many apps developed with Flutter – it’s not an exotic, niche framework anymore. As seen on the official website, not only Google uses it, but some big-brand companies as well. This bodes well for the technology’s support plan, and is quite an enticement to at least give it a try. Considering how relatively fresh the tech is, and that these companies probably had to do a little R&D before greenlighting a public app and still went with it, it does not seem like using it is that much of a risk anymore. It’s certainly viable and, given time (if at some point it won’t get bogged down with a hefty overhead and overly-complicated architecture), it has a chance of becoming a go-to solution for mobile apps.

As we all know, the market can be fickle, trends change and all that… But that should never stop us from exploring the new. And, in the end, Flutter seems worthy of our time.

Author: Wojciech Kuroczycki, Lead Developer



from Hacker News https://ift.tt/39bKqtn