High Throughput Experimentation ( HTE ) Directed to the Discovery , Characterization and Evaluation of Materials

We attempt to take a strategic view of the development and application of HTE techniques across a broad spectrum of chemical, material and earth sciences, looking for unifying assumptions and approaches. We consider why much of the development of HTE technologies and techniques, as well as the majority of their application, have taken place in industry or in institutes or centers working closely with industry. And we look for commonalities and synergies across diverse HTE application areas, taking examples from the energy, catalysis, formulations and biotechnology fields.

Oil & Gas Science and Technology – Rev. IFP Energies nouvelles, Vol. 70 (2015), No. 3, pp. 437-446. J.M. Newsam, published by IFP Energies nouvelles, 2014. DOI: 10.2516/ogst/2014040. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figure 1. Some key drivers for bringing HTE to bear: directed towards a specific objective; a vast compositional landscape to face; a broad space of processing options; multiple criteria determining 'performance'; optimum (optima) not predictable.


INTRODUCTION – HIGH THROUGHPUT EXPERIMENTATION (HTE) DRIVERS
High Throughput Experimentation is an approach to directed discovery and development that is engineered to provide multiple-fold efficiencies over conventional methods.
As evidenced by the other contributions in this collection, High Throughput Experimentation (HTE) is being applied across a very broad range of areas. Even within the space of materials and processes of relevance in the energy field, the problems being addressed, the workflows being devised, and the techniques being developed are hugely diverse.
The definition of HTE is thus concomitantly loose and broadly encompassing.
Even though applications are diverse, certain principles are typically common (Fig. 1). Usually, at the outset we have a specific objective; we have a specific problem to solve. We are seeking a material the properties of which we can reasonably articulate. In certain application domains this articulation might be termed a 'target product profile'. Or, we might be seeking a process that conforms to a defined set of requirements. This directed nature of HTE is one of the reasons why much of the development of HTE techniques and many of its practical applications have taken place in industry or in centers or institutions closely aligned with industry.
Typically, also, we have a vast compositional field to consider (Fig. 1). There is a combinatorial explosion of compositional possibilities. In an inorganic material space we might seek to sample not just binary elemental combinations, but ternaries, quaternaries, or quinternaries developed from a significant slate of elements (either as chemical derivatives such as oxides, sulfides, halides, hydroxides, etc., or as the raw elemental combination); in a formulation space we might have 10-40 discrete ingredients to cross-combine, with the selection of each such ingredient and their relative proportions to sample.
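The scale of this combinatorial explosion is easy to make concrete. As a quick illustration (the slate size of 30 elements is our own, purely illustrative, choice), counting only which elements are combined, before any ratios or processing conditions multiply the count further:

```python
from math import comb

# Hypothetical slate of 30 candidate elements (illustrative size only).
n_elements = 30

# Number of distinct binary, ternary, quaternary and quinary element
# combinations that could in principle be sampled; composition ratios
# and processing conditions multiply these counts further still.
counts = {k: comb(n_elements, k) for k in (2, 3, 4, 5)}
print(counts)  # {2: 435, 3: 4060, 4: 27405, 5: 142506}
```

Even at the level of bare element selection, the quinary count already exceeds 10⁵.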
An isolated small molecule entity is defined by its molecular structure. However, even in a small molecule system a vast number of molecular structures might share the same chemical formula [1]. The elemental composition alone does not define a material. Taking a material system with even a nominally simple unary composition, Cn, there is no end of ways of combining cubic (as in diamond) and hexagonal sheet stackings (as in lonsdaleite) for a crystal structure developed from all tetrahedrally-coordinated (sp³) carbon, or then of truncating such a crystal in differing morphologies or dimensions. There is then a further infinite space of possible combinations of tetrahedral (sp³) and trigonal (sp²) hybridizations arrayed in 3 dimensions. Then there is the infinite space of possible fullerenes, aggregated further into 3-dimensional arrangements in a further infinite number of ways. Then there is the myriad of graphites and graphenes. In any aggregate, such as a solid state material or a complex fluid, the details of the structure, including, for example, the manner of assembly, the defects within it and the nature of its truncation at the perimeter, are part of the definition. The structure is developed under the processing conditions; differing processing conditions (applied to other than discrete molecule systems) typically result in differing physical structures at the atomic, nano-, meso- or micro-, or macro-structural levels.
Further, the required performance of the targeted material or process almost never reflects the value of just a single parameter; we have multiple simultaneous requirements (Fig. 1).
In a system's view of materials [2, 3], the overall performance reflects the values of a given set of attributes (properties) of the given material. The material's properties reflect the structure, at the atomic-molecular level, but also at nano-, meso-, micro-, or macro-structural scales. These structural aspects are developed under the processing conditions, based on the initial composition (or synthesis parameters). This dependence of properties on structure and then indirectly on processing, while an opportunity, is also a very practical constraint on our HTE engineering designs. We rarely know how processing affects microstructure. And, in a field like heterogeneous catalysis, the catalytic properties may derive from 'defect' sites at low concentration in the bulk or on the surface, the concentration and nature of which may, in an opaque manner, be quite sensitive to the preparative conditions.
Finally, in the composition-processing parameter space we have defined, we cannot predict where the optimum, or where minima or maxima acceptable to within reasonable acceptance criteria, will lie (Fig. 1). We need to sample the space and, today, we must first sample the space at discrete points. This lack of predictability does not imply that we cannot compute the properties of the optimum and by simulation identify it as better than other material options; it may in fact be that our sampling is purely computational (the term 'HTE' is, after all, not high throughput experiment, but experimentation, encompassing application also of simulation). It is simply that no matter which current method(s) we choose to deploy in sampling the overall space, whether by experiment, by simulation or by a combination of the two, we have no a priori knowledge of the location of the optima (if we had, no experimentation campaign, whether by HTE or by conventional methods, would be needed).
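This sampling logic can be sketched in a few lines. The response surface below is an invented one-dimensional stand-in for the unknown true landscape; with no a priori knowledge of the optima, we simply sample the space at discrete points and keep the best observation:

```python
import math
import random

# Toy composition-processing landscape with two separated optima (an
# invented stand-in for the unknown true response surface).
def performance(x):
    return (math.exp(-((x - 0.2) ** 2) / 0.005)
            + 1.2 * math.exp(-((x - 0.8) ** 2) / 0.005))

# With no a priori knowledge of where the optima lie, sample the
# parameter space at discrete points and retain the best observation.
random.seed(0)
samples = [random.uniform(0.0, 1.0) for _ in range(200)]
best = max(samples, key=performance)
# With 200 samples the best point typically lands near the taller
# peak at x = 0.8, without any prior knowledge of its location.
```

In practice the space is of course high-dimensional and each evaluation is an experiment or simulation, not a function call; the point of HTE engineering is to make those evaluations cheap and parallel.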

RECENT EXAMPLES IN MATERIALS FOR ENERGY APPLICATIONS
The general drivers for deploying an HTE approach are illustrated in quite a number of recent publications. We select here, somewhat arbitrarily, HTE studies published early in 2014, so immediately prior to this conference. HTE has been applied with some success to the discovery and development of both homogeneous and heterogeneous catalysts [4-16]. As several recent examples are also discussed elsewhere in this volume, we look here to examples from other fields.
The deployment of new, environmentally friendly energy technologies often depends on the discovery and development of new functional materials for specific end-applications. For example, efficient conversion of solar energy to fuels requires the discovery of new electrocatalysts, particularly for the Oxygen Evolution Reaction (OER). The search for higher-performing electrocatalysts that comprise only earth-abundant elements provided the driver for an HTE campaign based on a workflow combining synthesis and screening [17]. High resolution inkjet printing was used to produce 5 456 discrete oxide compositions containing the elements nickel, iron, cobalt and cerium (precursor inks for each of the four metals were printed in an array on a conductive substrate, at a density corresponding to 3.8 nM of metal in each 1 mm² array spot, and then converted to the corresponding mixed metal oxide by calcination of the array in air at 350°C).
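One plausible construction of a 5 456-member quaternary library (an assumption on our part; the paper's exact gridding may differ) is all (Ni, Fe, Co, Ce) compositions on a simplex grid with a step of 1/30, about 3.3 at%:

```python
from itertools import product

# Enumerate all quaternary compositions (a, b, c, d), a+b+c+d = 30,
# on a simplex grid of step 1/30. This is an illustrative assumption;
# the published array's exact composition grid may differ.
STEP_DENOMINATOR = 30
compositions = [
    (a, b, c, STEP_DENOMINATOR - a - b - c)
    for a, b, c in product(range(STEP_DENOMINATOR + 1), repeat=3)
    if a + b + c <= STEP_DENOMINATOR
]
print(len(compositions))  # 5456, i.e. C(33, 3)
```

The count 5 456 equals C(33, 3), the number of ways to distribute 30 grid steps over 4 elements, which is why this particular gridding reproduces the library size quoted above.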
A custom Scanning Droplet Cell (SDC) was next used to provide an individual 3-electrode cell for each array spot in turn (including conducting substrate, capillary Ag/AgCl reference electrode, and platinum wire counter electrode) in O₂-saturated 1.0 M NaOH(aq); chronopotentiometry over 10 s at 10 mA·cm⁻² and cyclic voltammetry over 0-440 mV overpotential were measured.
Two interesting novel compositions were discovered (Fig. 2), Ni0.5Fe0.3Co0.17Ce0.03Ox and Ni0.3Fe0.07Co0.2Ce0.43Ox, both verified by resynthesis on glassy carbon rods. The pseudo-ternary composition Ni0.2Co0.3Ce0.5Ox derived from the latter 'high-Ce' electrocatalyst was then prepared by electrodeposition and found to provide a 10 mA·cm⁻² oxygen evolution current at 310 mV overpotential [17]. In addition to topical interest in OER electrocatalysts, another reason for citing this specific example is that, as evidenced in Figure 2, the two compositional fields that yield attractive OER performance are separated in the phase field by a 'valley' of less promising performance. A simple gradient-based search procedure starting in, say, the 'low-Ce' region would have missed the still more effective high-Ce composition.
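The failure mode of a gradient-style search on such a landscape is easy to demonstrate on a toy model. The two-peak activity surface below is invented (positions, widths and heights are ours, not the paper's); a local hill-climb started on the 'low-Ce' side converges to the nearer, lower peak and never crosses the valley:

```python
import math

# Invented two-peak landscape standing in for the OER activity surface:
# a 'low-Ce' peak at x = 0.1 and a taller 'high-Ce' peak at x = 0.45,
# separated by a valley of poor performance.
def activity(x):
    return (math.exp(-((x - 0.1) ** 2) / 0.002)
            + 1.5 * math.exp(-((x - 0.45) ** 2) / 0.002))

def hill_climb(x, step=0.01, iterations=1000):
    # Simple local search: move only while a neighbour improves.
    for _ in range(iterations):
        for candidate in (x - step, x + step):
            if activity(candidate) > activity(x):
                x = candidate
                break
        else:
            return x  # no improving neighbour: stuck at a local optimum
    return x

# Starting in the 'low-Ce' region, the search stalls at the local peak
# near x = 0.1 and never reaches the superior optimum near x = 0.45.
found = hill_climb(0.05)
```

A space-covering sampling, as in the inkjet-printed array, finds both compositional fields; the local search finds only one.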
A second example from the energy field, also published earlier in 2014, considers the development of organic redox couple materials for use in flow batteries [18]. In contrast to batteries with solid electrodes, which can maintain discharge at peak power for only a limited period, flow batteries, in which all electroactive species reside in fluid phases, can support independent scaling of power (scaling with electrode area) and energy (scaling with storage volume, which can then be arbitrarily large). To be practical, though, we need to achieve reasonable power densities and suitably fast electrochemical kinetics. The redox-active metals and precious-metal electrocatalysts that have historically been required prove too costly. Huskinson et al. [18] describe a metal-free flow battery that exploits the two-electron two-proton reduction of 9,10-AnthraQuinone-2,7-DiSulphonic acid (AQDS) on a glassy carbon electrode in sulfuric acid, in conjunction with the Br₂/Br⁻ redox couple. AQDS can be produced cheaply and its solubility and reduction potential can be modulated through suitable functionalization (Fig. 3).
Thus, incorporation of electron-donating hydroxy groups into the anthraquinone backbone of AQDS is expected both to lower the reduction potential, E⁰ (then increasing the cell voltage), and to alter the solvation free energy. Huskinson et al. [18] used first principles and parameterized calculations to compute these quantities for some 34 AQDS derivatives [18] (Fig. 3) with differing patterns of hydroxyl substitution (the total free energy of a given derivative was computed using density functional theory, the generalized gradient approximation, and the 1996 Perdew-Burke-Ernzerhof functional; the projector augmented wave technique and a plane-wave basis set provided in the VASP program were employed. The reduction potential was derived from the computed heat of formation of hydroquinone at 0 K from the quinone and hydrogen gas, ΔHf, through a correlation between ΔHf and E⁰ calibrated by experimental data on six quinones; the solvation free energy was calculated using a Poisson-Boltzmann solver [18]).
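The calibration step just described, fitting a linear correlation between computed ΔHf and measured E⁰ on a handful of reference quinones and then using it to predict E⁰ for new derivatives, can be sketched as follows. All numerical values here are invented placeholders, not the paper's data:

```python
# Linear ΔHf-to-E0 calibration, fitted by ordinary least squares on a
# small set of reference quinones. The (dHf, E0) pairs below are
# hypothetical values for illustration only.
reference = [
    (-1.40, 0.85), (-1.25, 0.75), (-1.10, 0.66),
    (-0.95, 0.57), (-0.80, 0.49), (-0.65, 0.38),
]  # (computed dHf in eV, measured E0 in V)

n = len(reference)
mean_x = sum(x for x, _ in reference) / n
mean_y = sum(y for _, y in reference) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in reference)
         / sum((x - mean_x) ** 2 for x, _ in reference))
intercept = mean_y - slope * mean_x

def predict_e0(dhf):
    """Predict E0 (V) for a new derivative from its computed dHf (eV)."""
    return slope * dhf + intercept
```

With the correlation in hand, screening the 34-member library reduces to one DFT heat-of-formation calculation per derivative followed by this cheap linear read-out.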

SIMULATION – EXPERIMENTATION COMPLEMENT: SAMPLING EXPERIMENTALLY INACCESSIBLE MATERIALS
The two HTE campaigns cited above evidence a typical experimental campaign and a not atypical simulation effort. Both are used to screen a library of prospective materials for key performance-determining properties (the library of the latter, simulation example being on this occasion much smaller than that of the former, experimental one). A point to underscore in such a comparison is that we rely on the simulation results equally and with as much confidence as on those obtained experimentally, and that the two offer complementary strengths.
Experimentally, it can be hard to simplify the system under measurement. We rarely have the luxury of varying just a single variable and considering the impact of that one change on properties. We lack that level of control over synthesis and processing. In counterpoint, with simulation the level of challenge typically increases with system complexity. Experimentally, by definition we are restricted to observation of the actual, real surface. With simulation, however, we can, at least as readily, sample experimentally inaccessible configurations. Of course, just as there is a risk of overlooking or misinterpreting experimental observations, without a definitive practicality constraint simulation can stray from sensibleness, for any of a number of reasons (software bugs; unsuitable choices of methodology, model parameters, basis set or functionals; inappropriate or overly-limited base models; sampling local minima but not the global minimum, etc.). But a major appeal of simulation is that we can indeed assess materials or configurations that cannot be sampled, or which would be prohibitively costly to sample, by experiment. We can ask questions as to the importance of particular classes of interactions, and as to the effect of changes in internal variables (such as composition or structural arrangement) and external variables (pressure, temperature, flow, etc.).
A now historical example is a study by crystal mechanics that probed the geometrical effects of Al-for-Si T-atom replacement (T = tetrahedrally coordinated framework cation) in the MFI framework [19] of the commercially important zeolite ZSM-5 [20]. The single negative framework charge introduced by the Al³⁺ for Si⁴⁺ substitution is compensated by a countercation, such as a TetraPropylAmmonium cation (TPA⁺) residing within the pore system, or a proton bound to one of the four bridging oxygen atoms adjacent to the Al site in the model. The accessibility and the chemical characteristics associated with the Al site depend on its location in the framework, but there are few data to indicate either the details of this dependence or the Al T-site 'preference' in real materials that results from a particular set of synthesis conditions.
In the simulation campaign [20], a library of models was developed comprising Al-for-Si replacement at, in turn, each of the crystallographically inequivalent T-sites in the orthorhombic description of the ZSM-5 structure (Fig. 4) (a monoclinic description of the same MFI framework derives through distortion from the orthorhombic form, but the topology of the T-sites and all but the fine details of the site environment geometries are the same in the two descriptions [19]). For each of the 12 distinct T-sites, there are then a total of 5 distinct models, comprising charge compensation either by TPA⁺ or by H⁺ at one of the 4 bridging oxygen atom sites adjacent to the Al site. A molecular mechanics force field (developed based on first principles computations, and validated for application to zeolitic materials) was used to optimize each of the models to an energy minimum configuration under constant pressure conditions, with no assumptions of crystallographic symmetry. The endemic challenge of finding a global energy minimum configuration in a space of many local minima was addressed, where considered necessary, by using molecular dynamics to sample the configurational space.
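The model library described above is a small, fully enumerable combinatorial space: 12 T-sites, each with 5 charge-compensation modes, giving 60 models in all. A sketch of that enumeration (the site labels are schematic, not crystallographic names from the paper):

```python
from itertools import product

# Enumerate the ZSM-5 model library: Al substitution at each of the 12
# crystallographically inequivalent T-sites, charge-balanced either by
# a TPA+ cation in the pore or by a proton on one of the 4 adjacent
# bridging oxygens. Labels here are schematic.
t_sites = [f"T{i}" for i in range(1, 13)]
compensations = ["TPA+"] + [f"H+ on O{j}" for j in range(1, 5)]

library = list(product(t_sites, compensations))
print(len(library))  # 60 models: 12 T-sites x 5 compensation modes
```

Each of the 60 entries then corresponds to one constant-pressure force-field optimization in the campaign.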
The simulations allow sampling of a number of computed properties, such as enthalpic energy differences between the configurations with differing aluminum T-site placement and proton position. As one potential reference to experimental data, we could also track how the computed unit cell dimensions, volume (Fig. 4), and symmetry change across the differing model configurations, and then compare these simulated data with experimental unit cell dimension measures. Without such detailed constant pressure simulations there would be no way to predict these patterns of unit cell geometry changes. While experimental data on the unit cell dimensions of ZSM-5 materials as a function of Al content continue to be quite sparse, comparison against the full set of simulation results is consistent with a disordered distribution of aluminum across multiple T-sites in real materials, at least those accessed synthetically to date [20].

DIRECTED MATERIALS SYNTHESIS
This zeolite example serves also to highlight an immense challenge. Namely, how might we devise ways to control the architecture of a solid, of perhaps defined composition, by appropriate choice of synthesis conditions? How then might we be able to translate a model for a hypothetical material that, to the best of our knowledge and simulation methods, appears feasible into a practical instantiation? How might we extend, in some fashion, the concepts of 3-D printing to a nano or molecular scale?

HOW SMALL IS BIG ENOUGH? EXAMPLES FROM MICROFLUIDICS
Examples of public civil engineering projects from the mid-to-late 19th century (Victorian times in the UK) are impressive. In part this impressiveness derives from the sheer bulk of such structures, particularly when contrasted with much leaner modern designs. Many of our experimentation set-ups, similarly, are orders of magnitude larger in sampling scale than should be necessary. For molecular properties, such as discrete optical behavior or interaction with a discrete receptor site in an enzyme, in principle we need probe only a single molecule (perhaps 0.5 zg); even at a ng level, we have redundancy of order 10¹² (this number being roughly equivalent to the number of people on 100 Earths).
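The redundancy figure is simple arithmetic worth making explicit (0.5 zg corresponds to a molecule of roughly 300 Da):

```python
# Back-of-envelope check of the redundancy quoted above: one molecule
# of about 0.5 zg (roughly 300 Da) versus a 1 ng sample.
single_molecule_g = 0.5e-21   # 0.5 zg per molecule
sample_g = 1e-9               # 1 ng sample

redundancy = sample_g / single_molecule_g
print(f"{redundancy:.0e}")  # 2e+12: a ~10^12-fold excess
```

So even nanogram-scale assays carry a trillion-fold excess over the single molecule that, in principle, defines the property.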
The challenges of working with ever smaller quantities of material, though, can be daunting. Manipulation, detection and property assessments are all hard, even under optimal circumstances. And there is the caveat, of course, that in property measurement we need to sample over a length scale that evidences the behavior of interest. Yet, where a set of primary properties of importance in determining performance is intrinsically molecular in nature, massive efficiency gains are promised by an HTE approach that dramatically reduces sample scales.
One route to a substantial scale reduction, microfluidics, is finding broadly expanding roles [21-23]. Reports catching the eye recently include application to: directed evolution (where some ~10⁸ individual enzyme reactions were sampled in 10 h, using < 150 μL total reagent volume) [24]; DNA sequence analysis [25]; rapid screening of solubility (in which nL droplets with a gradual variation in solute concentration were passed along a channel with a temperature gradient, enabling 10 points of the solubility curve to be accumulated in < 1 h and with some 250 mL of solution); screening protein crystallization conditions [26]; and screening for possible salt forms of pharmaceutical compounds [27, 28]. An exemplary study, from 2012, uses a microfluidic configuration [29] to sample dose-response curves, in this case mapping the extent of inhibition of an enzyme as a function of the concentration of each of a library of inhibitors. The heart of the system is a sequence of microdroplets, some 150 pL in volume, each containing a set concentration of enzyme (β-galactosidase in the prototypical experiments), substrate, a concentration of inhibitor (2-PhenylEthyl β-D-ThioGalactoside (PETG)), and a reporter, DT-682 (a fluorescent encoder). For a typical inhibitor molecule concentration in the 0.1-50 μM range, the 150 pL droplet would then contain some 45 pg to 200 ng of inhibitor.
The range of inhibitor concentration in this configuration is developed by injecting a slug of inhibitor solution into a flowing fluid stream in a capillary. The initial square wave of inhibitor concentration develops, by Taylor-Aris diffusion, into a Gaussian distribution. The sampling of this capillary feed in the microdroplet development stage then leads to a sequence of droplets having initially increasing inhibitor concentration and then, on the lagging side of the Gaussian, decreasing inhibitor concentration. Within each 150 pL microdroplet reaction vessel, the constituents mix thoroughly within some milliseconds. The microfluidics circuitry includes a delay line that accommodates recording of the fluorescence signals, in separate channels for the probe and the substrate product, at one of the 10 inspection stations along the microdroplet chain (Fig. 5), yielding the extent of enzyme inhibition at the given inhibitor concentration.
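The dispersion step can be sketched quantitatively: once the square pulse has relaxed, the axial concentration profile is a Gaussian whose variance grows with the effective dispersion coefficient. All parameter values below are illustrative, not taken from the study:

```python
import math

# Taylor-Aris dispersion sketch: an initially narrow slug of n0 moles
# relaxes to a Gaussian axial profile C(x, t) with variance 2*K*t,
# where K is the effective dispersion coefficient. Values illustrative.
def gaussian_profile(x_m, t_s, n0_mol=1e-12, k_m2_s=1e-8):
    """Moles per unit length at axial position x (m) after time t (s)."""
    sigma2 = 2.0 * k_m2_s * t_s
    return (n0_mol / math.sqrt(2.0 * math.pi * sigma2)
            * math.exp(-(x_m ** 2) / (2.0 * sigma2)))

# Droplets sampled along the profile see a concentration that first
# rises and then falls, tracing out the dose range for the curve.
profile = [gaussian_profile(x * 1e-3, t_s=10.0) for x in range(-3, 4)]
peak = max(profile)
```

Sampling droplets sequentially along this profile is what converts one injected slug into a whole sweep of inhibitor doses.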
At the flow rates typical of the microfluidic set-up, a single 2 mL sample of 2 wt% inhibitor solution would yield 10⁹ droplets. Once the system has been suitably configured, a huge number of data points populating the dose-response curve can then be collected. For a given inhibitor, some 10 000 were in practice typically accumulated (Fig. 5), resulting in IC50 values that are highly precise (± 2.40% at 95% confidence) and highly reproducible (CV = 2.45%, n = 16) [29]. Not only do we potentially gain the HTE efficiencies, but the quality of the data is also enhanced. This point is worth underscoring. Early in the development of HTE an oft-voiced concern was that an HTE configuration would necessarily yield data inferior, in any of several ways, to those obtained with more conventional configurations.
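The read-out from such a droplet train is an IC50: the inhibitor concentration at which inhibition reaches 50%. A minimal sketch, assuming a simple Hill-type dose-response (the IC50 and Hill slope below are invented, not the study's values):

```python
# Minimal dose-response sketch: per-droplet inhibition following a Hill
# curve, from which the IC50 is read back off the sampled curve.
TRUE_IC50_UM = 5.0   # invented 'true' IC50, in uM
HILL_SLOPE = 1.0     # invented Hill slope

def inhibition(conc_um):
    """Fractional enzyme inhibition at a given inhibitor concentration."""
    c = conc_um ** HILL_SLOPE
    return c / (TRUE_IC50_UM ** HILL_SLOPE + c)

# Droplet doses spanning the 0.1-50 uM range on a logarithmic grid.
concs = [0.1 * 10 ** (i / 10.0) for i in range(28)]
curve = [(c, inhibition(c)) for c in concs]

# Estimate IC50 as the dose whose inhibition is closest to 0.5; with
# ~10 000 droplets per curve this estimate becomes very precise.
ic50_est = min(curve, key=lambda point: abs(point[1] - 0.5))[0]
```

With thousands of droplets per inhibitor, the statistical precision quoted above follows directly from the sheer density of points along the curve.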

SCREENING A PROCESSING SPACE
Beyond molecule or material discovery, HTE is being deployed in process development or optimization. One example, from earlier in 2014, explores how the specific product(s) of a protein PEGylation reaction depend on the processing conditions. Derivatization of a 'biologic' (a protein therapeutic typically administered by injection) by Poly-Ethylene Glycol (PEG) can increase solubility, reduce the rate of thermal or proteolytic protein degradation, reduce immunogenicity, and slow the rate of renal clearance. By enhancing the useful lifetime of the protein in the circulation, the therapeutic utility is substantially improved.
The two primary PEGylation routes entail an acylating reaction, such as via N-hydroxysuccinimidyl-activated PEG, which targets surface lysine, histidine, or serine side chains, or an alkylating reaction, such as with PEG-aldehyde, which exclusively targets the ε-amino side chains of lysine or the N-terminal α-amino group. The prototypical protein lysozyme presents 6 accessible lysine residues to a PEGylation reagent; the product of a typical PEGylation reaction then comprises a distribution of differing levels of PEG attachment ('PEGamers'), at each of the 6 accessible lysine residues (each an 'isoform').
Maiser et al. [30] sampled how changes in protein-to-PEG molar ratio, buffer pH, and reaction time influenced the distribution of product PEGamers and isoforms, and the enzymatic activity of the prototypical hen egg white lysozyme.
In one sense, this was a relatively simple HTE workflow, employing a fluid-dispensing robot and a 96-well plate format, but it was enabled by sophisticated chromatographic methods that provided quantitation of each isoform in a given product mixture.
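A screen of this shape is essentially a full factorial design over the three processing variables. A sketch, with invented factor levels (the study's actual levels may differ), shows how naturally such a design maps onto a 96-well plate:

```python
from itertools import product

# Full factorial over the three PEGylation processing variables named
# above. The specific levels are illustrative assumptions, not the
# published design.
ratios = [1 / 2, 1 / 5, 1 / 10]     # protein : PEG molar ratio
ph_values = [7.0, 7.5, 8.0, 8.5]    # buffer pH
times_h = [1, 2, 4, 8]              # reaction time, hours

conditions = list(product(ratios, ph_values, times_h))
print(len(conditions))  # 48 conditions: half of a 96-well plate
```

Run in duplicate, such a grid fills the plate exactly, with each well feeding one chromatographic isoform quantitation.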

SAMPLING PROCESSING GEOMETRIES
In developing a complex fluid, the character of the fluid microstructure (which, as above, can govern properties) is developed under processing; the microstructure will usually vary depending on processing conditions, but also with changes in processing geometry (Fig. 6). For example, the microstructure of a fluid composition may vary as the geometry, orientation or position of a mixing blade in a simple overhead stirring arrangement is altered, or if a partial vortexing arrangement is instead used, or if ultrasound is applied. In early consideration of the application of HTE to fluid formulations, potential routes to sampling differing process geometries were initially considered [31], in parallel with approaches to engineering time-profiled introduction of multiple fluid components [32, 33], and to mixing and working more viscous fluids [34]. These exploratory directions were superseded by challenges with property screening, but how best to sample a space of differing processing geometries continues to intrigue.

AVOIDING CHEMISTRY IN COMBINATIONS: FLUID FORMULATIONS APPLIED TO THE SKIN
In many practical applications of fluid formulations, our intent is usually to avoid chemical reactions; in fluid formulations applied to the skin, such chemistry might degrade an active ingredient, or otherwise compromise durability or performance. As a material, human skin evidences a quite special set of properties. Its barrier function, to focus on one aspect, is developed primarily by the outermost layer of the epidermis, the Stratum Corneum (SC). The SC thickness varies from individual to individual and, more substantially, from body region to body region, but it is typically a mere 10-20 μm. The SC comprises layers of flattened, enucleated cells (corneocytes), connected by junctional complexes (corneosomes) and surrounded by a lipid envelope. We know the nature and the relative concentrations of the majority constituents of the lipid layers; from various imaging techniques we also have reasonable SC microstructural models. However, our molecular-level understanding of the details of molecule permeation through the SC, and of how such permeation is affected by other components in a fluid formulation, remains vague. For a given small molecule applied to the skin in a real fluid formulation (other than a saturated aqueous solution), we cannot predict the rate or extent of its permeation into and through the skin. We need to make measurements.

Figure 6. Spaces of variables to consider in applying HTE to fluid formulations; in addition to composition and processing conditions a), varying processing method(s) and corresponding geometry(ies) b) can lead to differing microstructures and, hence, performance.
In the traditional experimental configuration for measuring skin permeation, the diffusion cell [35], a piece of skin, some 2.5 cm by 2.5 cm square, is mounted over a receptor well, fully filled with a solvent to ensure uniform contact with the underside of the skin piece. Clamped on top of the skin piece is a donor well into which the test formulation is introduced. Fluid samples can be abstracted from the receptor at given time intervals via a sampling arm, and then analyzed for the concentration of the active. To generate robust and reliable data requires some attention to experimental detail. A typical experimentalist might complete 20-30 such measurements per day. Given that a topical drug formulation might comprise a combination of some 5-10 components, and a beauty care formulation some 25-40, there was some motivation to devise effective HTE techniques for making such measurements [36].
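The receptor-sampling step described above implies a small bookkeeping calculation: each aliquot withdrawn removes some of the permeated active, so later samples must be corrected for earlier withdrawals. A minimal sketch, with hypothetical well volumes and concentrations (none of these values are from the original article):

```python
# Illustrative sketch: cumulative permeated amount per unit skin area from
# receptor-phase samples taken from a diffusion cell. Volumes and
# concentrations are assumed, not taken from the article.

RECEPTOR_VOL_ML = 7.0   # receptor well volume (assumed)
SAMPLE_VOL_ML = 0.5     # aliquot withdrawn at each time point (assumed)
AREA_CM2 = 2.5 * 2.5    # exposed area of a ~2.5 cm x 2.5 cm skin piece

def cumulative_permeation(concs_ug_per_ml):
    """Cumulative amount permeated per cm^2 at each sampling time,
    correcting for the active removed in earlier aliquots."""
    removed = 0.0
    profile = []
    for c in concs_ug_per_ml:
        amount = c * RECEPTOR_VOL_ML + removed  # total permeated so far (ug)
        profile.append(amount / AREA_CM2)       # normalize to ug/cm^2
        removed += c * SAMPLE_VOL_ML            # active lost to this aliquot
    return profile

# Hypothetical concentration readings at four sampling times:
profile = cumulative_permeation([0.0, 0.8, 1.9, 3.1])
```

The correction matters most late in a run, when many aliquots have already been withdrawn; without it the permeation profile is systematically underestimated.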
The implication of 'high' in high throughput experimentation is relative. In this specific case of skin-applied formulations, our goal was to achieve 10-100 fold efficiency gains over conventional methods [35]. 100-fold gains were achieved through using change in skin electrical impedance as a crude proxy indicator of change in skin permeability [36, 37]; gains of some 10-fold were achieved through parallelization, modest scale reductions and automation [38].
This example of HTE application is chosen, though, to illustrate a final point. Even conventional measurements of permeation based on skin pieces taken from a single donor typically have standard errors of around 30%. With HTE's automation and richer sampling, we may improve overall data quality, but the complexities of a natural material like skin may impose an intrinsic limit on both experiment scale and predictability.
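The 30% figure frames how far richer sampling alone can help: if replicates were independent, the standard error of the mean would shrink only as the square root of the replicate count. A minimal sketch of that arithmetic (the independence assumption is ours, and is optimistic for biological material such as skin):

```python
# Sketch: replicate count needed to reach a target standard error of the
# mean, assuming independent measurements (SE scales as 1/sqrt(n)).
# The 30% single-measurement standard error is from the text; the 10%
# target is an illustrative choice.
import math

def replicates_needed(se_single, se_target):
    """Smallest n with se_single / sqrt(n) <= se_target."""
    return math.ceil((se_single / se_target) ** 2)

n = replicates_needed(0.30, 0.10)  # shrink a 30% SE to 10%: n = 9
```

Ninefold replication for a threefold error reduction illustrates why proxy measurements and parallelization, rather than brute repetition, delivered the efficiency gains described above.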

CONCLUSION
That HTE will play a principal role in the future materials research laboratory is a given. Yet aspects of HTE's broader role are unclear. While HTE is, by definition, directed:
− to what extent can we position an HTE workflow to yield discoveries that are serendipitous, that is, outside the scope for which the workflow is implemented?
− how can we engineer an HTE workflow implementation in a manner that enables optimal reuse of both data and engineering components?
− can we ensure access to state-of-the-art systems for academic groups, for both education and research?
− in our HTE design stages, how can we make the best informed decisions on the investment to make in simulation relative to experiment?
And there are exciting opportunities for yet greater efficiencies. To probe discrete chemical properties we would need, were we capable, to sample at only the molecular scale; similarly, in heterogeneous catalysis we continue to work at the macroscopic scale in screening, largely because we have almost no ability to predict or control how the catalytically active centers are developed under synthesis and processing conditions.
In considering the role of HTE in the laboratory of the future, there are more macroscopic questions also. How do we better inform our understanding of the Synthesis-Processing-Properties-Performance interplay? How can microanalytics propel new experimentation efficiencies? How can we best remain abreast of developments, particularly in analytical and engineering aspects, so as to co-opt such developments efficiently for materials R&D? Where is the right balance between global cooperation and streamlined sharing of capabilities and developments, and opaqueness to maintain a competitive differentiation at the continental, national, institutional or group level? And how can we best ensure that the next generation workforce is suitably trained and, as importantly, motivated to further advance this field?

Figure 1
Principles typically common across HTE applications:
− Directed towards specific objective
− Face vast compositional landscape
− Have broad space of processing options
− Multiple criteria determine 'performance'
− Optimum (optima) not predictable

Figure 2
a) A pseudoternary section of the (Ni-Fe-Co-Ce)Ox electrocatalyst space explored by Haber et al. [17]; b) the overpotential at 10 mA·cm−2 for the library of such pseudoternary compositions; and c) the catalytic current extracted from the cyclic voltammetry measurements for the high-Ce and low-Ce catalysts (after Haber et al. [17], with permission of the authors).

Figure 3
a) Schematic of the flow cell of Huskinson et al. [18]; discharge mode is shown, and in electrolytic/charge mode the arrows are reversed; AQDSH2 refers to the reduced form of AQDS; b) calculated reduction potentials of AQDS substituted variously with -OH groups (black), together with calculated (blue) and experimental (red) values for AQDS and DHAQDS (reproduced with permission of the authors [18]).

Figure 4
a) T-site numbering in the asymmetric unit of the MFI framework (orthorhombic description; the mirror plane generating the asymmetric unit of the monoclinic structure contains the oxygen atoms labeled with *); b) computed cell volume changes versus Al-substitution site for H-ZSM-5 and TPA-ZSM-5 with 4 Al per unit cell, relative to the MFI framework (SiO2 composition: V = 5724.0 Å³) optimized under the same conditions (reproduced with permission of the authors [20]).

Figure 5
a) Design of the microfluidic device used by Miller et al. [29] (dotted black arrows show the route of droplets through the device); b) scatter plot of measured percentage inhibition against PETG concentration (from a total of 11 113 droplets, blue dots) with a corresponding four-parameter Hill function fit (black line) (reproduced with permission from the authors [29]).
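For readers less familiar with the dose-response fit named in the Figure 5 caption, a minimal sketch of the four-parameter Hill function follows; the parameter values are illustrative only and are not the fitted values from Miller et al. [29].

```python
# Sketch of the four-parameter Hill function commonly used to fit
# inhibition-vs-concentration data; parameter values are illustrative.

def hill(c, bottom, top, ec50, n):
    """Response at concentration c: a sigmoid running from `bottom`
    to `top`, with midpoint at c = ec50 and steepness n."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

# At c == ec50 the response is exactly midway between bottom and top:
mid = hill(1.0, 0.0, 100.0, 1.0, 2.0)  # -> 50.0
```

In a droplet-screening workflow the four parameters would be obtained by least-squares fitting of this function to the per-droplet measurements, as in the fit shown in panel b).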